Tesla issues battery software update after Hong Kong vehicle fire

Tesla has started pushing out a software update that will change battery charge and thermal management settings in Model S sedans and Model X SUVs following a fire in a parked vehicle in Hong Kong earlier this week.

The software update, which Tesla says is being done out of “an abundance of caution,” is supposed to “protect the battery and improve its longevity.” The over-the-air software update will not be pushed to Model 3 vehicles.

Tesla has not yet identified the cause of the fire or found any issues with the battery pack. But the company said it will act if it discovers a problem.

“The safety of our customers is our top priority, and if we do identify an issue, we will do whatever is necessary to address it,” Tesla said in a statement.

Here is the company’s statement in its entirety on the software update:

We currently have well over half a million vehicles on the road, which is more than double the number that we had at the beginning of last year, and Tesla’s team of battery experts uses that data to thoroughly investigate incidents that occur and understand the root cause. Although fire incidents involving Tesla vehicles are already extremely rare and our cars are 10 times less likely to experience a fire than a gas car, we believe the right number of incidents to aspire to is zero.

As we continue our investigation of the root cause, out of an abundance of caution, we are revising charge and thermal management settings on Model S and Model X vehicles via an over-the-air software update that will begin rolling out today, to help further protect the battery and improve battery longevity.

A Tesla Model S caught on fire March 14 while parked near a Hong Kong shopping mall. The vehicle had been parked for about half an hour before it burst into flames. Three explosions were seen on CCTV footage, Reuters and the Apple Daily newspaper reported at the time.

“Tesla was onsite to offer support to our customer and establish the facts of this incident,” a Tesla spokesperson said. The investigation is ongoing.

Only a few battery modules were affected in the Model S that caught on fire, and the majority of the battery pack is undamaged, according to Tesla.

The company noted that the battery packs are designed so that if, in “the very rare instance,” a fire does occur, it spreads slowly and vents heat away from the cabin. The aim is to give occupants time to exit the vehicle.

The Hong Kong fire followed video footage posted in April that appears to show a Tesla Model S smoking and then exploding while parked in a garage in Shanghai.

Reality Check: The marvel of computer vision technology in today’s camera-based AR systems

British science fiction writer Sir Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.

AR experiences can seem magical, but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of camera-based AR systems such as our smartphones.

  1. How do computers know where they are in the world? (Localization + Mapping)
  2. How do computers understand what the world looks like? (Geometry)
  3. How do computers understand the world as we do? (Semantics)

Part 1: How do computers know where they are in the world? (Localization)

When NASA scientists put a rover on Mars, they needed a way for the robot to navigate itself on another planet without the use of a global positioning system (GPS). They came up with a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time without GPS. This is the same technique that our smartphones use to track their spatial position and orientation.

A VIO system is made up of two parts: a camera that tracks visual features from frame to frame, and an inertial measurement unit (IMU) that measures the device’s acceleration and rotation.
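To make the idea concrete, here is a minimal, purely illustrative sketch of the core fusion step: the IMU dead-reckons position at high frequency (and drifts), while less frequent camera-based fixes pull the estimate back toward reality. This is a toy 1-D complementary filter under assumed noise levels, not how any particular phone or rover actually implements VIO.

```python
# Illustrative only: fuse drifting IMU dead-reckoning with periodic camera fixes.
import numpy as np

def integrate_imu(position, velocity, accel, dt):
    """Dead-reckon position/velocity from an accelerometer reading (drifts over time)."""
    velocity = velocity + accel * dt
    position = position + velocity * dt
    return position, velocity

def fuse(imu_position, camera_position, alpha=0.98):
    """Blend the fast but drifting IMU estimate with a slower, drift-free visual estimate."""
    return alpha * imu_position + (1.0 - alpha) * camera_position

# Toy run: constant true acceleration, noisy sensors.
dt, pos, vel = 0.01, 0.0, 0.0
true_pos, true_vel, true_acc = 0.0, 0.0, 0.5
rng = np.random.default_rng(0)
for step in range(1000):
    true_vel += true_acc * dt
    true_pos += true_vel * dt
    accel_reading = true_acc + rng.normal(0, 0.05)      # noisy IMU sample
    pos, vel = integrate_imu(pos, vel, accel_reading, dt)
    if step % 10 == 0:                                   # camera fix at 10 Hz
        camera_fix = true_pos + rng.normal(0, 0.01)
        pos = fuse(pos, camera_fix)
print(f"true position: {true_pos:.3f} m, fused estimate: {pos:.3f} m")
```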

Google’s Translatotron converts one spoken language to another, no text involved

Every day we creep a little closer to Douglas Adams’ famous and prescient babel fish. A new research project from Google takes spoken sentences in one language and outputs spoken words in another — but unlike most translation techniques, it uses no intermediate text, working solely with the audio. This makes it quick, but more importantly lets it more easily reflect the cadence and tone of the speaker’s voice.

Translatotron, as the project is called, is the culmination of several years of related work, though it’s still very much an experiment. Google’s researchers, and others, have been looking into the possibility of direct speech-to-speech translation for years, but only recently have those efforts borne fruit worth harvesting.

Translating speech is usually done by breaking down the problem into smaller sequential ones: turning the source speech into text (speech-to-text, or STT), turning text in one language into text in another (machine translation), and then turning the resulting text back into speech (text-to-speech, or TTS). This works quite well, really, but it isn’t perfect; each step is prone to its own types of errors, and these can compound one another.
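As a rough sketch of that cascade, the structure looks like the snippet below. The three stage functions are trivial stand-ins, not any real STT, MT, or TTS API; the point is simply that each stage consumes the previous stage’s output, so errors compound.

```python
# Hedged sketch of the conventional three-step cascade; all stages are stand-ins.

def speech_to_text(audio: bytes, lang: str) -> str:
    # Stand-in for a real speech recognizer; may mishear words.
    return "hola mundo"

def translate_text(text: str, src: str, dst: str) -> str:
    # Stand-in for a real machine translation model; may mistranslate them.
    return {"hola mundo": "hello world"}.get(text, text)

def text_to_speech(text: str, lang: str) -> bytes:
    # Stand-in for a real synthesizer; loses the original speaker's voice entirely.
    return text.encode("utf-8")

def cascade_translate(source_audio: bytes, src_lang: str = "es", dst_lang: str = "en") -> bytes:
    text = speech_to_text(source_audio, lang=src_lang)       # step 1: STT
    translated = translate_text(text, src_lang, dst_lang)    # step 2: MT
    return text_to_speech(translated, lang=dst_lang)         # step 3: TTS

print(cascade_translate(b"...raw audio bytes..."))
```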

Furthermore, it’s not really how multilingual people translate in their own heads, as testimony about their own thought processes suggests. How exactly it works is impossible to say with certainty, but few would say that they break down the text and visualize it changing to a new language, then read the new text. Human cognition is frequently a guide for how to advance machine learning algorithms.

[Image caption: Spectrograms of source and translated speech. The translation, let us admit, is not the best. But it sounds better!]

To that end, researchers began looking into converting spectrograms (detailed frequency breakdowns of audio) of speech in one language directly into spectrograms in another. This is a very different process from the three-step one, and it has its own weaknesses, but it also has advantages.

One is that, while complex, it is essentially a single-step process rather than a multi-step one, which means that, assuming you have enough processing power, Translatotron could work more quickly. But more importantly for many, the process makes it easy to retain the character of the source voice, so the translation doesn’t come out sounding robotic, but with the tone and cadence of the original sentence.

Naturally this has a huge impact on expression, and someone who relies on translation or voice synthesis regularly will appreciate that not only what they say comes through, but how they say it. It’s hard to overstate how important this is for regular users of synthetic speech.
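For the curious, here is a hedged illustration of the kind of representation such a model maps between: a log-magnitude spectrogram computed with SciPy’s short-time Fourier transform. It is generic signal processing on a synthetic tone, not Translatotron’s actual feature pipeline, whose exact parameters are described in the paper.

```python
# Illustrative only: compute the kind of spectrogram a speech-to-spectrogram model works on.
import numpy as np
from scipy.signal import stft

sample_rate = 16000                                   # 16 kHz, a common rate for speech
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 220 * t)             # stand-in for a recorded utterance

freqs, times, Z = stft(audio, fs=sample_rate, nperseg=400, noverlap=240)
log_magnitude = 20 * np.log10(np.abs(Z) + 1e-8)       # log-magnitude spectrogram in dB
print(log_magnitude.shape)                            # (frequency bins, time frames)
```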

The accuracy of the translation, the researchers admit, is not as good as the traditional systems, which have had more time to hone their accuracy. But many of the resulting translations are (at least partially) quite good, and being able to include expression is too great an advantage to pass up. In the end, the team modestly describes their work as a starting point demonstrating the feasibility of the approach, though it’s easy to see that it is also a major step forward in an important domain.

The paper describing the new technique was published on arXiv, and you can browse samples of speech, from source to traditional translation to Translatotron, at this page. Just be aware that these are not all selected for the quality of their translation, but serve more as examples of how the system retains expression while getting the gist of the meaning.

White House rejects calls to endorse the “Christchurch Call” to block extremist content online

The United States will not join other nations in endorsing the “Christchurch Call” — a global statement that commits governments and private companies to actions that would curb the distribution of violent and extremist content online.

“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call. We will continue to engage governments, industry, and civil society to counter terrorist content on the Internet,” the statement from the White House reads.

The “Christchurch Call” is a non-binding statement drafted by foreign ministers from New Zealand and France meant to push internet platforms to take stronger measures against the distribution of violent and extremist content. The initiative originated as an attempt to respond to the March killings of 51 Muslim worshippers in Christchurch and the subsequent spread of the video recording of the massacre and statements from the killer online.

By signing the pledge, companies agree to improve their moderation processes and share more information about the work they’re doing to prevent terrorist content from going viral. Meanwhile, government signatories are agreeing to provide more guidance through legislation that would ban toxic content from social networks.

Already, Twitter, Microsoft, Facebook, and Alphabet — the parent company of Google — have signed on to the pledge, along with the governments of France, Australia, Canada, and the United Kingdom.

The “Christchurch Call” is consistent with other steps that government agencies are taking to address the ways in which technology is tearing at the social fabric. Members of the Group of 7 are also meeting today to discuss broader regulatory measures designed to combat toxic content, protect privacy, and ensure better oversight of technology companies.

For its part, the White House seems more concerned about the potential risks to free speech that could stem from any actions taken to stanch the flow of extremist and violent content on technology platforms.

“We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press,” the statement reads. “Further, we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging.”

Signatories are already taking steps to make it harder for graphic violence or hate speech to proliferate on their platforms.

Last night, Facebook introduced a one-strike policy that would ban users who violate its live-streaming policies after one infraction.

The Christchurch killings are only the latest example of how white supremacist hate groups and terrorist organizations have used online propaganda to create an epidemic of violence at a global scale. Indeed, the alleged shooter in last month’s attack on a synagogue in Poway, Calif., referenced the writings of the Christchurch killer in an explanation for his attack that he published online.

Critics are already taking shots at the White House for its inability to add the U.S. to a group of nations making a non-binding commitment to ensure that the global community can #BeBest online.

Google recalls its Bluetooth Titan Security Keys because of a security bug

Google today disclosed a security bug in its Bluetooth Titan Security Key that could allow an attacker in close physical proximity to circumvent the security the key is supposed to provide. The company says that the bug is due to a “misconfiguration in the Titan Security Keys’ Bluetooth pairing protocols” and that even the faulty keys still protect against phishing attacks. Still, the company is providing a free replacement key to all existing users.

The bug affects all Titan Bluetooth keys that have a “T1” or “T2” on the back.

To exploit the bug, an attacker would have to be within Bluetooth range (about 30 feet) and act swiftly as you press the button on the key to activate it. The attacker can then use the misconfigured protocol to connect their own device to the key before your own device connects. With that — and assuming that they already have your username and password — they could sign into your account.

Google also notes that before you can use your key, it has to be paired to your device. An attacker could also potentially exploit this bug by using their own device to masquerade as your security key and connect to your device when you press the button on the key. By doing this, the attacker could then make their device look like a keyboard or mouse and remotely control your laptop, for example.

All of this has to happen at exactly the right time, though, and the attacker must already know your credentials. Still, a persistent attacker could make that work.

Google argues that this issue doesn’t affect the Titan key’s main mission, which is to guard against phishing attacks, and argues that users should continue to use the keys until they get a replacement. “It is much safer to use the affected key instead of no key at all. Security keys are the strongest protection against phishing currently available,” the company writes in today’s announcement.

The company also offers a few tips for mitigating the potential security issues here.

Some of Google’s competitors in the security key space, including Yubico, decided against using Bluetooth because of potential security issues and criticized Google for launching a Bluetooth key. “While Yubico previously initiated development of a BLE security key, and contributed to the BLE U2F standards work, we decided not to launch the product as it does not meet our standards for security, usability and durability,” Yubico founder Stina Ehrensvard wrote when Google launched its Titan keys.

VMware acquires Bitnami to deliver packaged applications anywhere

VMware announced today that it’s acquiring Bitnami, the packaged application company that was a member of the Y Combinator Winter 2013 class. The companies didn’t share the purchase price.

With Bitnami, the company can now deliver more than 130 popular software packages in a variety of formats, such as Docker containers or virtual machines, an approach that should be attractive for VMware as it transforms into more of a cloud services company.

“Upon close, Bitnami will enable our customers to easily deploy application packages on any cloud — public or hybrid — and in the most optimal format — virtual machine (VM), containers and Kubernetes helm charts. Further, Bitnami will be able to augment our existing efforts to deliver a curated marketplace to VMware customers that offers a rich set of applications and development environments in addition to infrastructure software,” the company wrote in a blog post announcing the deal.

Per usual, Bitnami’s founders see the exit through the prism of being able to build out the platform faster with the help of a much larger company. “Joining forces with VMware means that we will be able to both double-down on the breadth and depth of our current offering and bring Bitnami to even more clouds as well as accelerating our push into the enterprise,” the founders wrote in a blog post on the company website.

The company has raised a modest $1.1 million since its founding in 2011 and says that it has been profitable since its early days, shortly after it took that funding. In the blog post, the company states that, from customers’ perspective, nothing will change.

“In a way, nothing is changing. We will continue to develop and maintain our application catalog across all the platforms we support and even expand to additional ones. Additionally, if you are a company using Bitnami in production, a lot of new opportunities just opened up.”

Time will tell whether that is the case, but it is likely that Bitnami will be able to expand its offerings as part of a larger organization like VMware.

VMware is a member of the Dell federation of companies and came over as part of the massive $67 billion EMC deal in 2016. The company operates independently, trades as a separate company on the stock market and makes its own acquisitions.

Microsoft open-sources a crucial algorithm behind its Bing Search services

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where users search through vast data troves, including in retail, though in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and the AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
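As a toy illustration of that encode-index-query flow, the snippet below does brute-force cosine similarity in NumPy over a small random corpus. It is not the SPTAG API; SPTAG’s value is doing this kind of lookup approximately, at billion-vector scale, in milliseconds, rather than scanning every stored vector as this sketch does.

```python
# Toy nearest-neighbor vector search: encode -> index -> query (conceptual only).
import numpy as np

def build_index(vectors):
    """Normalize stored vectors so a dot product equals cosine similarity."""
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def search(index, query, k=3):
    """Return indices of the k stored vectors most similar to the query vector."""
    q = query / np.linalg.norm(query)
    scores = index @ q
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
corpus_vectors = rng.normal(size=(10_000, 128))   # e.g., embeddings of snippets or images
index = build_index(corpus_vectors)
query_vector = rng.normal(size=128)               # the encoded user query
print(search(index, query_vector))
```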

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.

Daily Crunch: SF bans agencies from using facial recognition tech

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. San Francisco passes city government ban on facial recognition tech

The Stop Secret Surveillance Ordinance, introduced by San Francisco Supervisor Aaron Peskin, is the first ban of its kind for a major American city.

The ban would not impact facial recognition tech deployed by private companies, but it would affect any companies selling tech to city agencies, including the police department.

2. Uber Black launches Quiet Driver Mode

The “Quiet Mode” feature is free and available to everyone in the United States, but only on Uber Black and Uber Black SUV premium rides. Users can select “Quiet preferred,” “Happy to chat” or leave the setting at “No preference.”

3. New secret-spilling flaw affects almost every Intel chip since 2011

Security researchers have found a new class of vulnerabilities in Intel chips which, if exploited, can be used to steal sensitive information directly from the processor.

4. Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live-streaming service after it was used to broadcast the shocking mass shootings that left 51 dead at two Christchurch mosques in New Zealand in March.

5. American Express is acquiring Resy

Resy launched back in 2014 as a platform that allowed users to buy reservations from restaurants in situations where they’d usually have to book months in advance. Over time, Resy realized the opportunity to provide software to restaurants.

6. Jeff Bezos personally dumps a truckload of dirt on FedEx’s future

Amazon broke ground yesterday on a three-million-square-foot Prime Air airport outside Cincinnati (in Kentucky).

7. CEO Howard Lerman on building a public company and the future of Yext

In our interview, Lerman passionately defended the idea that “a company is the ultimate vehicle in America to effect good in the world.” (Extra Crunch membership required.)

New ‘Black Mirror’ trailer features Miley Cyrus, Anthony Mackie… and more dystopia

“Black Mirror” is coming back for its fifth season to once again show us why technology’s progress means we can no longer have nice things.

The new season will tell three stories written by Charlie Brooker and Annabel Jones.

Featured performers include Anthony Mackie, Miley Cyrus, Yahya Abdul-Mateen II, Topher Grace, Damson Idris, Andrew Scott, Nicole Beharie, Pom Klementieff, Angourie Rice, Madison Davenport and Ludi Lin.

The last “Black Mirror” feature to appear on Netflix was the interactive epic “Bandersnatch,” which let viewers determine the fate of characters throughout the course of the story.

It was an experiment that could cost Netflix, thanks to a lawsuit from Chooseco, the company behind the “Choose Your Own Adventure” series of books that inspired “Black Mirror’s” experiment in storytelling.

The fifth season likely marks a return to straight episodic narratives, with Cyrus featured in what “Variety” called a “meta storyline” about a celebrity who undergoes a transformation to attract more fans.

The new episodes will drop on Netflix on June 5.

Zoom, housing affordability, Mailchimp, Yext, and Uber

Conference call with CEO Eric Yuan of newly IPO’d Zoom

Since we first started Extra Crunch three months ago (my, time flies), we’ve been offering members live conference calls with our reporters. This week, we are trying something new and bringing a guest aboard.

TechCrunch’s SF-based startup and venture capital reporter Kate Clark is going to talk today with Eric Yuan, founder of the video conferencing company Zoom, which went public last month and made Yuan a very happy man.

Come armed with your questions or send them in to Arman Tabatabai. Instructions for joining the call will be mailed to members about an hour in advance, so check your inboxes.

Housing affordability market map

Dan Wu, a regtech and legaltech evangelist, published a great series of market maps on the housing affordability space this week on Extra Crunch, covering more than 200 companies and organizations. He looks at spaces as diverse as property management, land acquisition, group developers, and new financial asset classes.