The UpStream

YouTube Changes Policies for Kids Videos

posted Saturday Nov 25, 2017 by Scott Ertz

YouTube has once again come under scrutiny for its automation processes. Just a few months ago, the company's advertising automation caused a massive advertiser boycott, including large companies such as AT&T, Enterprise Rent-A-Car, Johnson & Johnson and Verizon. Now the company is having trouble with parents who have noticed some incredibly inappropriate content appearing on the YouTube Kids platform.

The company's algorithms began allowing adult-oriented content to appear in the family-friendly Kids platform: for example, Mickey Mouse in a pool of blood, or a claymation of Spider-Man urinating on Frozen's Elsa. There has also been an influx of sexual comments on videos of children, particularly on newer "challenge" videos, such as the yoga challenge or the ice pants challenge. BuzzFeed recently revealed a number of videos on the platform showing children being abused or in vulnerable situations.

Because of these issues, YouTube has put new policies in place to try to eliminate, or at least slow, the problem. The highlights of the new policies are:

  • Tougher application of community guidelines and faster enforcement through technology
  • Removing ads from inappropriate videos targeting families
  • Blocking inappropriate comments on videos featuring minors
  • Providing guidance for creators who make family-friendly content
  • Engaging and learning from experts

On the surface, these measures sound reasonable, but there are a few issues. Chief among them, the definitions of "inappropriate" and "targeting families" are incredibly vague. The company's only example is a 5-month-old guidance on using family-oriented characters in violent or sexual situations. However, the terminology could be applied to content, such as videogame videos, that includes a player swearing. For example, if you have ever watched a Super Mario Maker speed-runner, you know that they can get verbally abusive toward the game. Even though the videos are not aimed at children or families, this definition could demonetize them for no real reason.

If YouTube really wants to deal with the content on YouTube Kids, it should implement an opt-in on video uploads. Then, the algorithms could scan that content to ensure it meets the regulations, rather than assuming that every video could be a potential match for the service. For example, we are aware that not all of our shows would be entirely appropriate for kids, so we would not opt in on those episodes. This would, of course, limit the number of videos in the service, so, conversely, they could allow an opt-out for those videos that could get caught in the filter.
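The proposed opt-in flow could be sketched in a few lines. Every name and field here is hypothetical, purely to illustrate the idea: only videos whose uploader opted in get scanned, and only those that pass the scan reach the Kids catalog.

```python
def passes_kids_scan(video):
    # Stand-in for YouTube's automated content review.
    return "violence" not in video["tags"] and "profanity" not in video["tags"]

def build_kids_catalog(uploads):
    # Scan only opted-in videos, rather than assuming every upload is a candidate.
    return [v for v in uploads if v["kids_opt_in"] and passes_kids_scan(v)]

uploads = [
    {"title": "Counting Song", "kids_opt_in": True, "tags": []},
    {"title": "Speedrun Rage", "kids_opt_in": False, "tags": ["profanity"]},
    {"title": "Edgy Parody", "kids_opt_in": True, "tags": ["violence"]},
]
print([v["title"] for v in build_kids_catalog(uploads)])  # ['Counting Song']
```

The design trade-off is exactly the one described above: the catalog shrinks to whatever creators bother to opt in, which is why an opt-out escape hatch for wrongly filtered videos would also be needed.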

What Does the DOJ Mean by 'Responsible Encryption'?

posted Saturday Nov 25, 2017 by Scott Ertz

Since the 90s, there has been a struggle between the tech industry and the government over the implementation of encryption. The tech industry has always argued that people deserve and need the ability to protect their data. For example, corporations might want to protect their financial data, or information about new product development. Individuals also have the right to protect themselves from theft. Encrypting photos could have helped protect Apple users during the iCloud hack.

On the other side of the argument is the government, which believes that encryption should be breachable by them. For over 2 decades, law enforcement has claimed that strong encryption makes data warrant-proof, and they need a way to be able to deal with that inevitability. The Department of Justice has petitioned time and again for what Deputy Attorney General Rod Rosenstein is currently calling "responsible encryption," which essentially means that they expect data companies to provide a backdoor through which they can sneak into your data.

Tech companies have refused to comply with this request, sometimes taking the issue to court. Apple famously fought an order to unlock an iPhone owned by one of the San Bernardino attackers. They claimed that, once a backdoor was created, it would exist and create an ever-present threat. This is the same argument made against backdoors in standard encryption processes.

During a discussion at Ars Live, Riana Pfefferkorn, a legal fellow at the Stanford Center for Internet and Society, made the point that the term "responsible" is misleading.

I think what Rosenstein is getting at is that he believes that companies in their deployment of encryption should be responsible to law enforcement above all and public safety rather than being responsible to their users or the broader security ecosystem.

She believes that the topic has come up again now because the DOJ smells "blood in the water" after the perceived mishandling of data regarding Russia during the presidential election, despite the fact that the two topics are unrelated. But the DOJ's arguments and tactics have not really changed in over 2 decades. In 2015, a research paper was released, showing that nothing has changed on either side.

The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard-to-detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

From a software standpoint, the important argument here is that a backdoor is a backdoor; once it exists, it exists. There is no way to secure a backdoor, because people are people. A key will leak, because there are a lot of people involved: developers, law enforcement, government officials. If the breach exists, it will be exploited immediately, either by someone involved, a hacker or law enforcement without a warrant. No good can possibly come from making encryption insecure.
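To make that single-point-of-failure argument concrete, here is a toy sketch of key escrow, the mechanism "responsible encryption" proposals generally imply. This is illustrative pseudo-crypto built on a hash-based keystream, not a real cipher, and every name in it is hypothetical. The point it demonstrates: each message key is wrapped twice, once for the user and once under an escrowed master key, so whoever holds (or steals) the master key can read everything.

```python
import hashlib
import os

MASTER_KEY = b"escrowed-master-key"  # the "backdoor" key held by a third party

def keystream(key, length):
    # Derive a repeatable keystream from a key (toy construction, NOT secure).
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(user_key, plaintext):
    # Encrypt under a fresh message key, then wrap that key twice:
    # once for the user, once for the escrow holder.
    msg_key = os.urandom(16)
    ciphertext = xor(plaintext, keystream(msg_key, len(plaintext)))
    wrapped_user = xor(msg_key, keystream(user_key, 16))
    wrapped_escrow = xor(msg_key, keystream(MASTER_KEY, 16))
    return ciphertext, wrapped_user, wrapped_escrow

def decrypt(key, wrapped, ciphertext):
    msg_key = xor(wrapped, keystream(key, 16))
    return xor(ciphertext, keystream(msg_key, len(ciphertext)))

ct, for_user, for_escrow = encrypt(b"alice-secret", b"meet at noon")
print(decrypt(b"alice-secret", for_user, ct))  # Alice reads her own message...
print(decrypt(MASTER_KEY, for_escrow, ct))     # ...and so does any master key holder
```

Leak MASTER_KEY once, and every wrapped_escrow value ever produced becomes an open door; that is the irreversibility the tech industry keeps pointing to.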

Microsoft Brings Mixer Support to Minecraft

posted Saturday Nov 25, 2017 by Scott Ertz

When Microsoft bought Twitch competitor Beam in 2016, it promised to bring more social streaming features to its biggest game franchises. Arguably its biggest franchise, Minecraft, has just received those capabilities on the rebranded service, now called Mixer. While the alliteration might be enough for some, the new features are even more exciting.

As with other games that integrate Mixer directly, players can stream from within the game without any need for additional software. This allows the game to implement the feature that sets Mixer apart from its competitors: social control. Viewers have the ability to control the player's environment, spawning obstacles and even enemies.

In Minecraft, viewers could potentially have the ability to change night to day, add elements to the player's environment, or even spawn monsters to challenge the player. This makes the process automated, rather than manual as it would be on Twitch or YouTube Gaming, where viewers can only flood the chatroom with comments, hoping that enough of them will cause the player to trigger the event themselves.

For the player, however, there are ways to put limitations on what the viewers can do. If you don't want to allow viewers to add items to your landscape, you can turn it off. If you want to limit the ability to add enemies, you can. This gives control to both the streamer and viewer in tandem.
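That two-sided control might look something like the sketch below: viewer actions map to in-game events, but each one is gated by the streamer's settings. All names here are invented for illustration; this is not Mixer's actual API.

```python
# Hypothetical streamer-side permission settings.
STREAMER_SETTINGS = {
    "allow_spawn_items": True,
    "allow_spawn_enemies": False,  # streamer has turned enemy spawns off
    "allow_time_change": True,
}

# Map each viewer action to the permission that gates it.
ACTION_PERMISSION = {
    "spawn_item": "allow_spawn_items",
    "spawn_enemy": "allow_spawn_enemies",
    "set_time": "allow_time_change",
}

def handle_viewer_action(action, settings=STREAMER_SETTINGS):
    # Unknown actions and disallowed actions are both rejected.
    permission = ACTION_PERMISSION.get(action)
    if permission is None or not settings[permission]:
        return f"blocked: {action}"
    return f"applied: {action}"

print(handle_viewer_action("set_time"))     # applied: set_time
print(handle_viewer_action("spawn_enemy"))  # blocked: spawn_enemy
```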

These new features are available now for Windows 10, Xbox One and Android version 1.2.5.

Uber and the Terrible, Horrible, No Good, Very Bad Week

posted Saturday Nov 25, 2017 by Scott Ertz

From time to time, a company has a really bad week. Not just one, but multiple incidents happen in succession, causing a lot of trouble. Sometimes companies weather these weeks just fine, and sometimes they take a major toll on the company's image for a long time. This week might have been the latter for Uber, with 2 major legal blows in very short order.

Data Breach

Uber revealed this week that it has been on the receiving end of a massive data breach, and not a recent one. With 57 million accounts compromised, covering both drivers and riders, it's the kind of breach that requires immediate attention, and immediate notification of those affected. Notification allows those affected to ensure that their passwords are safe, their credit cards are not being used, etc.

Unfortunately, Uber decided to handle the breach in a very different way. The company revealed this week that the breach occurred in 2016, but the information was never disclosed at the time. Instead, ousted former CEO Travis Kalanick decided to pay the hackers $100,000 for the promise that they would delete the data. That isn't exactly how hackers work, though, so you're still going to want to verify that your information is safe.

Current CEO Dara Khosrowshahi discovered the issue and was surprised to find out that there was a breach that was never disclosed. He immediately set about to see how the company handled it, and was not happy. In his public statement, he said,

You may be asking why we are just talking about this now, a year later. I had the same question, so I immediately asked for a thorough investigation of what happened and how we handled it.

In response to his findings, he said that two security employees were no longer with the company, including Chief Security Officer Joe Sullivan. He continued, saying,

None of this should have happened, and I will not make excuses for it. While I can't erase the past, I can commit on behalf of every Uber employee that we will learn from our mistakes. We are changing the way we do business, putting integrity at the core of every decision we make and working hard to earn the trust of our customers.

Bad Background Checks

This particular one is not new for the company. Uber has had several run-ins with riders and attorneys claiming that background checks have been incomplete, inaccurate or, in one case, not run. There was even a period of time where Uber had neglected to make any decisions based on those background checks, allowing drivers with violent pasts, DWI arrests and even no driver's license to drive under the company's brand.

This week, another batch of bad drivers was revealed by the Colorado Public Utilities Commission. After a driver assaulted a rider in Vail, the commission opened an investigation into the company's business practices and announced that 57 drivers who should never have been allowed to drive for the company had been. According to the report,

PUC staff found that Uber allowed individuals to drive with previous felony convictions, major moving violations (DUI, DWI, reckless driving, driving under restraint), and numerous instances of individuals driving with suspended, revoked or cancelled driver's licenses.

One of the drivers in question was even an escaped convict. All of these issues would obviously come out in even the least detailed of background checks. Half of them can be discovered simply by reading a local newspaper. Because of the obvious oversight, or possibly purposeful ignorance, the state has fined Uber $8.9 million.

According to Stephanie Sedlak, a spokesperson for Uber,

We recently discovered a process error that was inconsistent with Colorado's ridesharing regulations and proactively notified the Colorado Public Utilities Commission (CPUC).

This error affected a small number of drivers and we immediately took corrective action. Per Uber safety policies and Colorado state regulations, drivers with access to the Uber app must undergo a nationally accredited third-party background screening. We will continue to work closely with the CPUC to enable access to safe, reliable transportation options for all Coloradans.

The strangest part of this statement is the suggestion that, if it weren't for Colorado's regulations, Uber would have had no issue with letting these drivers continue. That does not instill a lot of confidence in the company's morals or safety processes. It would seem that there are certain universal truths that would eliminate a potential driver from contention, including being an escaped convict, a violent felon or someone legally not permitted to drive any vehicle.

Amazon Originals to Bring Lord of the Rings to Television

posted Sunday Nov 19, 2017 by Scott Ertz

Over the last 15 years, one of the most successful movie franchises has been The Lord of the Rings. In that time, Peter Jackson has created 6 films - 3 from the original trilogy and 3 from The Hobbit. Between these film series, however, there is a veritable treasure trove of additional content. In particular, The Silmarillion.

Enter Amazon Studios, which has paid $250 million for the rights to produce a television series in the Lord of the Rings universe. It is not certain that the series will be produced from the overwhelmingly large collection of content about Middle Earth contained within The Silmarillion, but it is a good guess. It is the largest repository of canon content set between the two major stories, and Amazon has confirmed that the series will take place in the same gap.

Like other Amazon Originals, it will be available to Amazon Prime Members, though an arrival date is not yet available. Based on the massive cost of the project, both monetarily and in time, as well as the early stage of production, it would not be out of the question for the first season to premiere in late 2019.

Even before production starts, however, Amazon has already made a multi-season commitment to the project. That means that, no matter how well the first season does, we are guaranteed at least a second season. If the show does as well as Amazon is obviously expecting, there is also talk of a spin-off series, which could bring the focus of the story down a new path, but there are a lot of variables before that could be a possibility.

As a long-term fan of Middle Earth, I am both excited for and worried about this project. There is a lot of possibility for success, but it would not be the first time a Middle Earth production was a disaster. The estate was so embarrassed by the last project that we almost didn't get the Peter Jackson films at all. I will be watching this production with anticipation and trepidation.

DJI Threatens Legal Action Over Embarrassing Bug Bounty Report

posted Sunday Nov 19, 2017 by Scott Ertz

Over the past few years, the idea of a "bug bounty program" has grown quickly. Microsoft, Apple and Google all offer money for finding issues in their software, and smaller companies have taken to introducing similar programs. Unfortunately, most companies have not managed them in a detailed or responsible manner. Case in point: DJI, manufacturer of the Phantom quadcopter drone line. The company released its program in August, but never really explained what might be included. Some companies look for firmware issues, while others encourage server research.

Kevin Finisterre decided he would reach out to the company, looking for details on the program. After some back-and-forth, it was made clear that server issues were included in the program. So, Finisterre set out to find issues in what is becoming an increasingly dangerous place for security breach data: GitHub. As expected, Finisterre was able to find SSL certificate information, as well as public and private keys for Amazon Web Services.
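The kind of discovery described here can be approximated with simple pattern matching. The AWS access key ID prefix (AKIA followed by 16 uppercase alphanumeric characters) and the PEM private-key header are well-known formats; the minimal scanner below is our own illustrative sketch, not Finisterre's actual tooling.

```python
import re

# Patterns for two common credential leaks. The AKIA prefix format is
# publicly documented by AWS; the PEM header is the standard private-key
# marker. Real scanners use many more patterns plus entropy checks.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (label, match) pairs for any credential-like strings found."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

leaked = "aws_key = 'AKIAABCDEFGHIJKLMNOP'\n-----BEGIN RSA PRIVATE KEY-----"
for label, match in scan_text(leaked):
    print(label, "->", match)
```

Running something like this across public repositories is exactly why committed configuration files keep turning into breach reports.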

After communicating his detailed and extensive findings to the company, he was offered a job consulting on security. That was, until the legal department got involved, and the entire tone of the conversation changed. Instead of a job, the company threatened legal action against him for hacking. They even sent over a contract that was insulting at best: it required him to be silent on the topic, and promised no protection from legal action for the data found in his report. He said of the interaction,

In the days following no less than 4 lawyers told me in various ways that the agreement was not only extremely risky, but was likely crafted in bad faith to silence anyone that signed it. I went through various iterations to get the letter corrected. It was ultimately going to cost me several thousand dollars for a lawyer that I was confident could cover all angles to put my concerns to bed and make the agreement sign-able.

After refusing to sign the contract and turning down a $30k bounty, Finisterre instead published his findings and his interactions with the company. The company, on the other hand, began a smear campaign against Finisterre, publishing a statement calling him a "hacker" and diminishing his findings.

DJI is investigating the reported unauthorized access of one of DJI's servers containing personal information submitted by our users. As part of its commitment to customers' data security, DJI engaged an independent cyber security firm to investigate this report and the impact of any unauthorized access to that data. Today, a hacker who obtained some of this data posted online his confidential communications with DJI employees about his attempts to claim a "bug bounty" from the DJI Security Response Center.

DJI implemented its Security Response Center to encourage independent security researchers to responsibly report potential vulnerabilities. DJI asks researchers to follow standard terms for bug bounty programs, which are designed to protect confidential data and allow time for analysis and resolution of a vulnerability before it is publicly disclosed. The hacker in question refused to agree to these terms, despite DJI's continued attempts to negotiate with him, and threatened DJI if his terms were not met.

This interaction underscores several issues plaguing the software industry. First is the open sourcing of software by irresponsible developers. When developers don't know the proper process for making code public, things go wrong, such as releasing database connection strings, cloud keys and more. This can make very private information, such as driver's licenses and passports in this case, available to the public.

The second issue is poorly implemented bounty programs. If a company does not have a detailed user guide for their program, it is easy for it to turn sour, especially when a bug or security issue embarrasses the company. A reward can turn into a lawsuit or, worse yet, criminal charges. This can ruin a developer or security expert's career in perpetuity.
