Microbiome testing service uBiome puts its co-founders on administrative leave after FBI raid

The microbiome testing service uBiome has placed its founders and co-chief executives, Jessica Richman and Zac Apte, on administrative leave following an FBI raid on the company’s offices last week.

The company’s board of directors has named John Rakow, currently its general counsel, as interim chairman and chief executive, the company said in a statement.

The board is also conducting an independent investigation into the company’s billing practices, overseen by a special committee of directors.

It was only last week that the FBI went to the company’s headquarters to search for documents related to an ongoing investigation. What’s at issue is the way that the company was billing insurers for the microbiome tests it was performing on customers.

“As interim CEO of uBiome, I want all of our stakeholders to know that we intend to cooperate fully with government authorities and private payors to satisfactorily resolve the questions that have been raised, and we will take any corrective actions that are needed to ensure we can become a stronger company better able to serve patients and healthcare providers,” Rakow said in a statement.

“My confidence is based on the significant clinical evidence and medical literature that demonstrates the utility and value of uBiome’s products as important tools for patients, health care providers and our commercial partners,” Rakow added.

It’s been a rough few weeks for consumer companies developing microbiome testing services and treatments based on those diagnostics. In addition to the FBI raid on uBiome, Seattle-based Arivale was forced to shut down its “consumer program” after raising more than $50 million from investors, including Maveron, Polaris Partners and ARCH Venture Partners.

uBiome is backed by investors including Andreessen Horowitz, OS Fund, 8VC, Y Combinator, DNA Capital, Crunchfund, StartX, Kapor Capital, Starlight Ventures, and 500 Startups.

Microsoft extends its Cognitive Services with personalization service, handwriting recognition APIs and more

As part of its rather bizarre news dump before its flagship Build developer conference next week, Microsoft today announced a slew of new pre-built machine learning models for its Cognitive Services platform. These include an API for building personalization features, a form recognizer for automating data entry, a handwriting recognition API and an enhanced speech recognition service that focuses on transcribing conversations.

Maybe the most important of these new services is the Personalizer. There are few apps and websites, after all, that aren’t looking to provide their users with personalized features. That’s difficult, in part, because it often involves building models based on data that sits in a variety of silos. With Personalizer, Microsoft is betting on reinforcement learning, a machine learning technique that doesn’t need the labeled training data that supervised approaches require. Instead, the reinforcement agent constantly tries to find the best way to achieve a given goal based on what users do. Microsoft argues that it is the first company to offer a service like this; it has been testing Personalizer on Xbox, where it saw a 40% increase in engagement with content after implementing the service.
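
Under the hood, this is a two-call loop: the app asks the service to rank a set of candidate actions for the current context, shows the winning action, then reports back a reward score based on what the user did. Here is a minimal Python sketch of that pattern, with a placeholder endpoint, key, event ID and feature names; the route names follow the preview documentation, but treat the details as illustrative rather than authoritative.

```python
import requests

# Placeholder resource endpoint and key; substitute your own.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/personalizer/v1.0"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# 1. Ask the service to rank candidate actions for the current context.
rank_request = {
    "eventId": "session-42",  # hypothetical ID tying rank and reward together
    "contextFeatures": [{"timeOfDay": "evening"}, {"device": "console"}],
    "actions": [
        {"id": "racing-game", "features": [{"genre": "racing"}]},
        {"id": "rpg-game", "features": [{"genre": "rpg"}]},
    ],
}
ranked = requests.post(f"{ENDPOINT}/rank", headers=HEADERS, json=rank_request).json()
best_action = ranked["rewardActionId"]  # the action the model wants you to show

# 2. Report how the user responded (1.0 = engaged, 0.0 = ignored).
# This reward signal is what the reinforcement learner trains on.
requests.post(f"{ENDPOINT}/events/session-42/reward",
              headers=HEADERS, json={"value": 1.0})
```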

The handwriting recognition API, or Ink Recognizer as it is officially called, can automatically recognize handwriting, common shapes and documents. That’s something Microsoft has long focused on as it developed its Windows 10 inking capabilities, so maybe it’s no surprise that it is now packaging this up as a cognitive service, too. Indeed, Microsoft Office 365 and Windows use exactly this service already, so we’re talking about a pretty robust system. With this new API, developers can now bring these same capabilities to their own applications, too.
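
For a sense of what developers send the service: the request body carries the raw x,y stroke points an app captured from the pen or finger, and the response groups them into recognized words, shapes and layout. A rough sketch in Python, where the endpoint path and payload shape are assumptions based on the preview announcement:

```python
import requests

# Endpoint path and payload shape are assumptions from the preview docs.
URL = "https://api.cognitive.microsoft.com/inkrecognizer/v1.0-preview/recognize"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# Each stroke is the sequence of x,y points captured while the pen was down.
payload = {
    "language": "en-US",
    "strokes": [
        {"id": 1, "points": "100,50,120,47,140,51"},  # hypothetical pen stroke
        {"id": 2, "points": "100,80,140,80"},
    ],
}
result = requests.put(URL, headers=HEADERS, json=payload).json()
# The response groups strokes into recognized words, common shapes and layout.
```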

Conversation Transcription does exactly what the name implies: it transcribes conversations and it’s part of Microsoft’s existing speech-to-text features in the Cognitive Services lineup. It can label different speakers, transcribe the conversation in real time and even handle crosstalk. It already integrates with Microsoft Teams and other meeting software.

Also new is the Form Recognizer, an API that makes it easier to extract text and data from business forms and documents. This may not sound like a very exciting feature, but it solves a very common problem: the service needs only five samples to learn how to extract data, and users don’t have to do any of the arduous manual labeling that’s often involved in building these systems.
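
The workflow boils down to two REST calls: train a custom model against a handful of sample forms in blob storage, then send new documents to that model for extraction. A minimal sketch, assuming the preview route names and using purely hypothetical storage URLs and file names:

```python
import requests

# Hypothetical resource; route names follow the v1.0-preview announcement.
BASE = "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v1.0-preview/custom"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# Train on as few as five sample forms sitting in an Azure blob container.
train = requests.post(f"{BASE}/train", headers=HEADERS,
                      json={"source": "https://<storage>.blob.core.windows.net/forms?<sas>"})
model_id = train.json()["modelId"]

# Extract key-value pairs and tables from a new form.
with open("invoice.pdf", "rb") as f:
    analysis = requests.post(f"{BASE}/models/{model_id}/analyze",
                             headers={**HEADERS, "Content-Type": "application/pdf"},
                             data=f.read()).json()
```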

Form Recognizer is also coming to Cognitive Services containers, which allow developers to take these models outside of Azure and onto their edge devices. The same is true for the existing speech-to-text and text-to-speech services, as well as the anomaly detector.

In addition, the company also today announced that its Neural Text-to-Speech, Computer Vision Read and Text Analytics Named Entity Recognition APIs are now generally available.

Some of these existing services are also getting some feature updates, with the Neural Text-to-Speech service now supporting five voices, while the Computer Vision API can now understand more than 10,000 concepts, scenes and objects, together with 1 million celebrities, compared to 200,000 in a previous version (are there that many celebrities?).

Microsoft brings Plug and Play to IoT

Microsoft today announced that it wants to bring the ease of use of Plug and Play, which today allows you to plug virtually any peripheral into a Windows PC without having to worry about drivers, to IoT devices. Typically, getting an IoT device connected and up and running takes some work, even with modern deployment tools. The promise of IoT Plug and Play is that it will greatly simplify this process and do away with the hardware and software configuration steps that are still needed today.

As Azure corporate vice president Julia White writes in today’s announcement, “one of the biggest challenges in building IoT solutions is to connect millions of IoT devices to the cloud due to heterogeneous nature of devices today – such as different form factors, processing capabilities, operational system, memory and capabilities.” This, Microsoft argues, is holding back IoT adoption.

IoT Plug and Play, on the other hand, offers developers an open modeling language that will allow them to connect these devices to the cloud without having to write any code.
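
Microsoft hasn’t published the full specification here yet, but the gist is that each device ships with a declarative capability model that the cloud reads to discover the device’s telemetry, properties and commands. Something loosely like the following JSON, an illustrative sketch rather than the actual schema:

```json
{
  "@id": "urn:contoso:thermostat:1",
  "@type": "CapabilityModel",
  "displayName": "Contoso Thermostat",
  "implements": [
    {
      "schema": {
        "@type": "Interface",
        "contents": [
          { "@type": "Telemetry", "name": "temperature", "schema": "double" },
          { "@type": "Property", "name": "firmwareVersion", "schema": "string" },
          { "@type": "Command", "name": "reboot" }
        ]
      }
    }
  ]
}
```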

Microsoft can’t do this alone, though, since it needs the support of the hardware and software manufacturers in its IoT ecosystem, too. The company has already signed up a number of partners, including Askey, Brainium, Compal, Kyocera, STMicroelectronics, Thundercomm and VIA Technologies. It says that dozens of devices are already Plug and Play-ready; potential users can find them in the Azure IoT Device Catalog.

Microsoft launches a fully managed blockchain service

Microsoft didn’t rush to bring blockchain technology to its Azure cloud computing platform, but over the course of the last year, it started to pick up the pace with the launch of its blockchain development kit and the Azure Blockchain Workbench. Today, ahead of its Build developer conference, it is going a step further by launching Azure Blockchain Service, a fully managed offering that allows for the formation, management and governance of consortium blockchain networks.

We’re not talking cryptocurrencies here, though. This is an enterprise service that is meant to help businesses build applications on top of blockchain technology. It is integrated with Azure Active Directory and offers tools for adding new members, setting permissions and monitoring network health and activity.

The first supported ledger is J.P. Morgan’s Quorum. “Because it’s built on the popular Ethereum protocol, which has the world’s largest blockchain developer community, Quorum is a natural choice,” Azure CTO Mark Russinovich writes in today’s announcement. “It integrates with a rich set of open-source tools while also supporting confidential transactions—something our enterprise customers require.” To launch this integration, Microsoft partnered closely with J.P. Morgan.

The managed service is only one part of this package, though. Microsoft also today launched an extension for Visual Studio Code to help developers create smart contracts. The extension allows Visual Studio Code users to create and compile Ethereum smart contracts and deploy them either to the public chain or to a consortium network in Azure Blockchain Service. The code is then managed by Azure DevOps.
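
Since a consortium member node is just an Ethereum-compatible RPC endpoint, you could also reach it outside the VS Code tooling with any standard client library. A hedged sketch using web3.py, with a made-up node URL and with the contract’s ABI and bytecode assumed to come from the compile step the extension automates:

```python
from web3 import Web3

# Hypothetical Azure Blockchain Service transaction-node URL and access key.
w3 = Web3(Web3.HTTPProvider("https://<member>.blockchain.azure.com:3200/<access-key>"))

# The ABI and bytecode would come out of the Solidity compile step.
abi = [...]         # placeholder
bytecode = "0x..."  # placeholder

contract = w3.eth.contract(abi=abi, bytecode=bytecode)
tx_hash = contract.constructor().transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Deployed at", receipt.contractAddress)
```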

Building applications for these smart contracts is also going to get easier thanks to integrations with Logic Apps and Flow, Microsoft’s two workflow integration services, as well as Azure Functions for event-driven development.

Microsoft, of course, isn’t the first of the big companies to get into this game. IBM, especially, has made a big push for blockchain adoption in recent years, and AWS, too, is now joining in after mostly ignoring the technology. Indeed, AWS opened up its own managed blockchain service only two days ago.

Microsoft launches a drag-and-drop machine learning tool

Microsoft today announced three new services that all aim to simplify the process of machine learning. These range from a new interface for a tool that completely automates the process of creating models, to a new no-code visual interface for building, training and deploying models, all the way to hosted Jupyter-style notebooks for advanced users.

Getting started with machine learning is hard. Even running the most basic of experiments takes a good amount of expertise. All of these new tools greatly simplify this process by hiding away the code or giving those who want to write their own code a pre-configured platform for doing so.

The new interface for Azure’s automated machine learning tool makes creating a model as easy as importing a data set and then telling the service which value to predict. Users don’t need to write a single line of code, while on the backend, this updated version now supports a number of new algorithms and optimizations that should result in more accurate models. While most of this is automated, Microsoft stresses that the service provides “complete transparency into algorithms, so developers and data scientists can manually override and control the process.”
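
For developers who would rather drive the same automation from code, the Azure Machine Learning Python SDK exposes it as well. Roughly like this, assuming a workspace config file and an already-loaded training dataset (the dataset and column names are illustrative):

```python
from azureml.core import Workspace, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()  # reads the config.json downloaded from the portal

# Point automated ML at a dataset and the column to predict; the service
# then tries algorithms and hyperparameters on your behalf.
automl_config = AutoMLConfig(
    task="classification",
    training_data=train_ds,       # an Azure ML tabular dataset, assumed loaded
    label_column_name="churned",  # hypothetical target column
    primary_metric="accuracy",
    iterations=20,
)

run = Experiment(ws, "automl-demo").submit(automl_config)
run.wait_for_completion(show_output=True)
best_run, fitted_model = run.get_output()  # inspect or override the winner
```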

For those who want a bit more control from the get-go, Microsoft also today launched into preview a visual interface for its Azure Machine Learning service that will allow developers to build, train and deploy machine learning models without having to touch any code.

This tool, the Azure Machine Learning visual interface, looks suspiciously like the existing Azure ML Studio, Microsoft’s first stab at building a visual machine learning tool. Indeed, the two services look identical. The company never really pushed this service, though, and almost seemed to have forgotten about it, despite the fact that it always seemed like a really useful tool for getting started with machine learning.

Microsoft says that this new version combines the best of Azure ML Studio with the Azure Machine Learning service. In practice, this means that while the interface is almost identical, the Azure Machine Learning visual interface extends what was possible with ML Studio by running on top of the Azure Machine Learning service and adding that service’s security, deployment and lifecycle management capabilities.

The service provides an easy interface for cleaning up your data, training models with the help of different algorithms, evaluating them and, finally, putting them into production.

While these first two services clearly target novices, the new hosted notebooks in Azure Machine Learning are geared toward the more experienced machine learning practitioner. The notebooks come pre-packaged with support for the Azure Machine Learning Python SDK and run in what the company describes as a “secure, enterprise-ready environment.” While using these notebooks isn’t trivial either, this new feature allows developers to quickly get started without the hassle of setting up a new development environment with all the necessary cloud resources.
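
The appeal is that a cell in one of these hosted notebooks is already wired to your workspace, so the standard opening lines of an Azure ML session work without any local setup. A short sketch of what such a first cell might look like, assuming the SDK’s usual entry points are what comes pre-installed:

```python
from azureml.core import Workspace

ws = Workspace.from_config()  # pre-wired to the hosting workspace
print(ws.name, ws.location)

# From here the full SDK surface is available: data, compute, experiments.
print(list(ws.datasets))         # registered datasets
print(list(ws.compute_targets))  # attached compute resources
```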

Microsoft announces the $3,500 HoloLens 2 Development Edition

As part of its rather odd Thursday afternoon pre-Build news dump, Microsoft today announced the HoloLens 2 Development Edition. The company announced the much-improved HoloLens 2 at MWC Barcelona earlier this year, but it’s not shipping to developers yet. Currently, the best release date we have is “later this year.” The Development Edition will launch alongside the regular HoloLens 2.

The Development Edition, which will retail for $3,500 to own outright or on a $99 per month installment plan, doesn’t feature any special hardware. Instead, it comes with $500 in Azure credits and 3-month trials of Unity Pro and the Unity PiXYZ plugin for bringing engineering renderings into Unity.

To get the Development Edition, potential buyers have to join the Microsoft Mixed Reality Developer Program and those who already pre-ordered the standard edition will be able to change their order later this year.

As far as HoloLens news goes, that’s all a bit underwhelming. Anybody can get free Azure credits, after all (though usually only $200) and free trials of Unity Pro are also readily available (though typically limited to 30 days).

Oddly, the regular HoloLens 2 was also supposed to cost $3,500. It’s unclear if the regular edition will now be somewhat cheaper, cost the same but come without the credits, or, really, why Microsoft is doing this at all. Turning this into a special “Development Edition” feels more like a marketing gimmick than anything else, as well as an attempt to bring some of the futuristic glamour of the HoloLens visor to today’s announcements.

The folks at Unity are clearly excited, though. “Pairing HoloLens 2 with Unity’s real-time 3D development platform enables businesses to accelerate innovation, create immersive experiences, and engage with industrial customers in more interactive ways,” says Tim McDonough, GM of Industrial at Unity, in today’s announcement. “The addition of Unity Pro and PiXYZ Plugin to HoloLens 2 Development Edition gives businesses the immediate ability to create real-time 2D, 3D, VR, and AR interactive experiences while allowing for the importing and preparation of design data to create real-time experiences.”

Microsoft also today noted that Unreal Engine 4 support for HoloLens 2 will become available by the end of May.

Microsoft brings Azure SQL Database to the edge (and Arm)

Microsoft today announced an interesting update to its database lineup with the preview of Azure SQL Database Edge, a new tool that brings the same database engine that powers Azure SQL Database in the cloud to edge computing devices, including, for the first time, Arm-based machines.

Azure SQL Database Edge, Azure corporate vice president Julia White writes in today’s announcement, “brings to the edge the same performant, secure and easy to manage SQL engine that our customers love in Azure SQL Database and SQL Server.”

The new service, which will also run on x64-based devices and edge gateways, promises to bring low-latency analytics to edge devices as it allows users to work with streaming data and time-series data, combined with the built-in machine learning capabilities of Azure SQL Database. Like its larger brethren, Azure SQL Database Edge will also support graph data and comes with the same security and encryption features that can, for example, protect the data at rest and in motion, something that’s especially important for an edge device.

As White rightly notes, this also ensures that developers only have to write an application once and then deploy it to platforms that feature Azure SQL Database, good old SQL Server on premises and this new edge version.
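
Because it’s the same SQL engine speaking the same protocol, an application connects to the edge instance exactly as it would to SQL Server or Azure SQL Database. A quick sketch with Python’s pyodbc, using a made-up device address, credentials and table:

```python
import pyodbc

# Hypothetical edge-device address, credentials and table name.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=192.168.1.20,1433;DATABASE=edgedb;UID=sa;PWD=<password>"
)

# The same T-SQL you'd run in the cloud works against the edge engine,
# e.g. a rolling aggregate over the last five minutes of sensor telemetry.
rows = conn.cursor().execute("""
    SELECT sensor_id, AVG(temperature) AS avg_temp
    FROM sensor_readings
    WHERE reading_time > DATEADD(MINUTE, -5, SYSUTCDATETIME())
    GROUP BY sensor_id
""").fetchall()
for sensor_id, avg_temp in rows:
    print(sensor_id, avg_temp)
```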

SQL Database Edge can run in both connected and fully disconnected fashion, something that’s also important for many use cases where connectivity isn’t always a given, yet where users still need data analytics capabilities to keep their businesses (or drilling platforms, or cruise ships) running.

Spotify spotted testing ‘Your Daily Drive,’ a personalized playlist that includes podcasts

Spotify was recently spotted testing a new personalized playlist — the first one to mix music and podcasts, customized to a user’s musical tastes. According to a report by The Verge, which saw the new playlist, titled “Your Daily Drive,” first-hand, the playlist features short episodes from podcasts followed by a curated selection of music that fits your interests.

In the case of the test The Verge saw, however, the podcast episodes were in Portuguese — an indication that this was not meant for public consumption at this time.

However, the site wasn’t the only one to come across the test.

Several other users have also had the personalized playlist presented to them in recent days, including those based in the U.S. and abroad.

So far, the responses have been overwhelmingly positive.

Spotify declined to offer any explanation or details about the playlist.

“We’re always testing new products and experiences, but have no further news to share at this time,” a spokesperson said.

The idea behind a playlist like this is interesting, as it would demonstrate Spotify’s ability to successfully bring its personalization technology to podcasts. This is something Pandora is doing as well, by expanding its “Genome” categorization technology to podcast recommendations. Today, its recommendation system leverages the Podcast Genome Project to learn more about what sorts of audio programs users like, which it then turns into suggestions of what to listen to.

Spotify, meanwhile, is best known for its hugely popular personalized playlist “Discover Weekly,” which has become one of the service’s biggest draws. If Spotify were to expand that personalization technology to spoken word and then combine it with personalized music, it could potentially have the next big hit — in terms of personalized playlists — on its hands.

The “Daily Drive” playlist could also give Spotify a better way to compete with Alexa’s Daily Briefing, which has become a popular way that Echo owners start their day. Though mainly listened to in the home for the time being, Amazon has been working to expand Alexa to vehicles, through not only third-party devices, but also its own Echo Auto.

With Echo in the vehicle, Alexa users might begin to access their Daily Briefing during their commute instead of launching Spotify, as before. And once Alexa has captured the user engagement, it could be easier to then hand those users off to Amazon’s own music service, which is tightly integrated with Echo devices.

Amazon even recently launched its own free, ad-supported music service, designed to be played only on Echo and other Alexa-enabled devices.

Spotify has been talking up its plans to turn on personalization for podcasts for some time, as it delves further into the market through acquisitions — like Gimlet, Parcast and Anchor — and investments in original content.

In the company’s earnings call with investors this week, Spotify CEO Daniel Ek noted how important its personalization technology is to its new efforts around podcasts.

“Personalization is one of our core pillars of our strategy. And there, we are obviously really, really far along in music,” he explained. “Podcasts [are] a much newer space for us. The way we merchandise podcasts, the way we recommend podcasts is completely different,” Ek continued. “Of course, we want to expand on [the product] and become an even better experience. And there, personalization is absolutely key…we’re still in the early innings,” he added.

Verizon reportedly seeking to sell Tumblr

Last year’s decision to ban porn from its platform has had a markedly adverse effect on Tumblr’s traffic. No surprise, really, especially given how wide the net was cast for “adult content” when the ban was announced back in December. Now the blogging platform’s media parent is looking to sell, according to a new story from The Wall Street Journal.

The paper cites “people familiar with the matter.” We reached out to Verizon Media Group (which, for the record, also owns TechCrunch) and unsurprisingly got your bog-standard statement about not commenting on rumors.

A sale wouldn’t be much of a surprise, given Tumblr’s history at the company. Yahoo bought the platform for north of $1 billion in 2013, with Verizon inheriting it as part of its 2017 acquisition of Yahoo. Tumblr was rolled up into the short-lived Oath business, which has since been rebranded as the much more straightforward Verizon Media Group.

As the piece notes, Tumblr ultimately failed to be the money maker Yahoo and Verizon were hoping for, exacerbated by the fact that other social media properties have since taken some of the wind out of the company’s sails. A few years after the acquisition, Yahoo had written the site’s value down significantly. Verizon’s Q1 financials, meanwhile, had Media revenues down 7.2 percent year over year.

The recent adult content ban has managed to both derail traffic and upset much of the site’s core user base. But Tumblr has stood firm, citing concerns over graphic child exploitation, all while arguably casting its net far too wide by sweeping in anything that falls under the adult content banner.

It’s tough to say which media company might be in the market for a Tumblr at this point. The once white-hot platform doesn’t hold the same sort of cachet it did when it was purchased half a decade ago. Notably, Tumblr also lost its CTO to SeatGeek earlier this week.

Brand new fiber link from Alaska to US will carry 100 terabits over the Yukon

Alaska is disconnected from the rest of the U.S. in a lot of ways — that’s kind of the point of Alaska. But giving residents the choice to participate in the modern online world is important for the state’s prosperity, and to that end one of the state’s largest broadband providers is collaborating with Canada to lay hundreds of miles of high-speed fiber connecting Alaska to the rest of the country.

The Alaska Canada Overland Network from co-op telecom MTA will directly connect the northern state to the lower 48 over land for the first time, as the name suggests, improving connectivity statewide with a 100-terabit link that could grow as better tech empowers the fiber.

To be clear, Alaska is hardly an island right now — but connectivity is slower and more complicated than it needs to be for the kind of high-bandwidth tasks many take for granted. Most traffic from Alaska has to travel out through one of four aging submarine cables, traveling down to Seattle, where the nearest peering facility is, before heading out to the broader internet. Not a problem if you’re just downloading an album, but the increased latency and reduced bandwidth may make things like HD video a chore, as well as adding friction to industries like web development.

“Businesses forget about Alaska. They forget it’s part of the states,” said MTA’s CEO, Michael Burke, in an interview with TechCrunch. Although a major project recently concluded connecting outer communities to the state’s backbone, it didn’t add to the pool of bandwidth available — it might just have put more people on the same lines. “Terabit capacity requires a lot of backhaul. And some of these cables are 20 years old — I don’t know how much more can be squeezed out of them.”

The solution is of course to build better infrastructure, but how do you do that when there are 2,000 miles of another country between origin and destination?

In fact Canada is facing similar problems, as I wrote about years ago, in that it struggles to connect its populous south with the wild, sparsely inhabited north. And it turns out that over the years investments in connectivity have added up.

Twenty years back or so, when broadband infrastructure was first being laid, very little fiber had been built north of Canada’s southern border. If you’d wanted to lay a cable from Alaska to Seattle, it would have been 2,000 miles long.

“What’s happened quietly since then is there’s been a lot of fiber builds in Canada,” Burke said. “Canadians are just like the U.S., they want to build out fiber into the rural areas. And they’ve constructed fiber all the way to Haines Junction, which is only 200 miles away from the U.S.-Canada border.”

That means that instead of building a 2,000-mile deployment from scratch, they can run a couple hundred miles and connect with existing infrastructure.

It’s a win-win operation: The fiber will be used to send traffic from Alaska’s interchange at the border down to Seattle, just like the submarine cables except way faster. MTA is providing money to the Canadian providers to build out the fiber that extra distance, which they might not have invested in otherwise (there’s no government money involved, it’s a private endeavor). But having done so, the new infrastructure will add redundancy to the region’s systems and provide a base on which to build more if necessary. Traffic between Alaska and Canada will also be massively accelerated, since it can just jump in at the border rather than go through Seattle.

But isn’t even a few hundred miles of fiber a chore to lay down? Well, yes — but coastal British Columbia isn’t upper Nunavut.

“This project is more like what you’d see in any other state,” said Burke. “We’re running it along the Alaska-Canadian Highway, it’s well maintained. It’s not as technically challenging and difficult as places you have to cross terrain where there’s no roads.”

“The surveying and preliminary work has been done,” he continued. “The materials have been purchased. They’re going to start trenching and laying fiber next week — it’ll be done and lit up in 2020.”

It’s not always worth calling attention to when companies lay down fiber, as crucial as it is to our modern economy. But this is a particularly interesting case and one that stands to make a big difference for a lot of people in a place, as Burke noted, often forgotten by the vanguard of the information age.