Saturday, July 13, 2019


Why everyone's wrong about Apple's $999 monitor stand

You can't beat an Apple keynote for getting people riled. If there's not something in there that makes someone utterly furious then Apple's probably not doing its job properly. So it's no surprise that this week's WWDC 2019 reveal of the new Mac Pro has provoked a torrent of online scorn; not for the Mac Pro itself, but for the Pro Stand for its swish new Pro Display XDR monitor, which comes separately for $999.
What? $999 for a monitor stand? Is Apple stupid? LOLZ!!!
Clearly we're into uncharted territory here. You buy a monitor, you kind of expect the stand to come with it, right? And you absolutely, definitely don't expect to be stung for a grand.
And so the big takeaway from WWDC seems to be that Apple's completely lost the plot with its $999 monitor stand. Because seriously, what sort of idiot's going to pay that sort of money for something you should get thrown in for free?
The Pro Stand does all this, but is it really worth a grand? [Image: Apple]
Well, hold your horses for just one second. For starters, Apple clearly isn't stupid. It does very nicely indeed out of high-end kit where you pay a premium for the Apple badge and the slick design. Everyone knows that for the price of the average iMac or MacBook you can buy one of the best computers for graphic design, machines that are a whole lot more powerful even if some of them look like they've been hit with a sack of ugly spanners; for many people that distinction makes it well worth paying the additional Apple tax.
So we have no doubt whatsoever that a healthy chunk of that $999 Pro Stand price tag is pure profit aimed straight at the top of Apple's ever-growing cash mountain. But we also have no doubt that Apple has done its sums and its research and decided that this is the right price for the Pro Stand.

Built to perform

When you look at what the Pro Stand actually does, it becomes apparent that this is a serious piece of kit that's built to perform. It's described as making the seven-and-a-half kilo Pro Display XDR feel weightless, it enables you to adjust the height and tilt it effortlessly, and it leaves the screen absolutely stable once it's in place. We're not engineers, but that sounds like quite an achievement to us, especially when you factor in that as well as doing all that, the Pro Stand has to be utterly reliable and built to last – after all, it's the one thing stopping your $5,000 monitor smashing onto the desk.
So, we suspect that, as with so much other Apple kit in the past, once the Pro Stand's out there and people get to play with it, they're going to love it. And they're probably going to go on and on about it. The bastards.
Still, though, selling a $5,000 monitor that doesn't have a stand at all feels like a bit of a misfire. If you're not sold on the Pro Stand then there is of course another option: the $199 VESA mount that you can use to attach your Pro Display XDR to the wall mount or desk stand of your choice. We've had a bit of a look around, though, and there doesn't seem to be any sort of VESA stand that looks as good or works anything like as nicely as the Pro Stand does.
Of course, what Apple could have done is ship the XDR with a basic stand like the one on an iMac, with the Pro Stand as an optional extra. But hey, that's Apple; we suspect that this would be an inelegant solution in its view.
Selling a monitor without a stand seems like madness, but if Apple included the Pro Stand and bumped the XDR's price by $1,000, anyone who instead wanted to wall-mount their monitor – which, given that a lot of Mac Pros are going to end up in editing suites and the like, could be quite a proportion of the market – would be rightly annoyed at being charged for an unwanted high-end stand.
If your workspace doesn't look like this then $12,000 of new Apple hardware probably isn't for you [Image: Apple]
And ultimately, it seems that most of the people complaining about the Pro Stand and its price aren't actually the people who are going to be buying it. The new Mac Pro isn't for the average creative; it's for serious video and film production companies and the like. And while it's clearly expensive for what it is, it's also clearly going to find a market, expensive extras and all, complete with the Pro Display XDR and its $999 stand, because it's Apple and because it does exactly what these high-end studios need while looking fantastic. Suck it down, haters.


Apple’s $999 Pro Stand is just the latest sign of its identity crisis

Image Credit: Apple
If you’ve followed Apple for a while, you probably know that the company has spent decades evolving from a personal computer maker to a luxury retailer that outpaces diamond vendor Tiffany & Co. in revenues per square foot. But what might superficially seem like a linear progression from selling $666 motherboards out of a garage to hawking $17,000 gold watches in shopping malls has actually been a much less orderly transition — and one that at times has gone in completely different directions.
Consider, for a moment, Apple’s January 2005 introduction of the $99 iPod shuffle, which helped Apple win millions of new budget-conscious customers, and its September 2010 release of the fourth iPod shuffle — a complete music player for $49. Only months earlier, it had introduced the 27″ LED Cinema Display for $999, describing it as a “perfect fit with our powerful new Mac Pro.” By July 2011, it released the improved Thunderbolt Display at the same price.
But if there was any question that Tim Cook’s Apple isn’t looking to save its customers money, it’s been answered by this week’s tone-deaf announcement of a $999 “Pro Stand” accessory for its latest professional monitor, which, as none of its customers would have guessed beforehand, ships without a stand. As a particularly brilliant tweet noted, the Thunderbolt Display somehow was a full best-of-class LED monitor for $999, while what Apple offers now for the same price is just the stand.
Apple didn’t just get to this point on Monday. The past five years saw the introduction of gold Apple Watches and trash can-shaped Mac Pros that were seemingly made without worrying whether actual customers would purchase them. More recently, Apple has pushed up the entry prices for its flagship and near-flagship iPhones, Apple Watches, iPads, and Macs while killing off inexpensive models. If one wanted to read a message into this, it would be that Apple doesn’t want the $49 iPod shuffle, $149 iPod nano, or $299 iPad mini customers any more, at least unless they’re willing to cough up some more cash for newer models.
The complex reality is that Apple has been selling a mix of high- and low-priced products for years, historically with a greater mix of high-priced ones, but it appeared to be on a trajectory toward democratizing its hardware until Steve Jobs’ untimely death in 2011. Since then, however, the company has tried to simultaneously expand its footprint and satisfy Wall Street, creating an identity crisis that seems to plague every new product announcement. Is a new iPad or Mac going to be priced so that students and school districts can afford it, or is Apple going to need to give away hardware to disadvantaged schools to demonstrate that it still cares about education?
In my view, the main issue is that Apple — now run by a core team of executives who make millions of dollars each year — has become insensitive to the fact that its products are commonly purchased by people who view $159 AirPods as luxuries. Based on comments in the last two quarterly conference calls, these executives apparently learned only recently that they needed to offer multi-year financing plans and device trade-in programs to help customers buy even $749 phones, to say nothing of $999 models. My take is that the people complaining about $999 Pro Stands aren’t primarily the (very narrow) target audience for $4,999 Pro Displays, but rather the millions of customers who could never afford them and find the very idea of selling such things ridiculous.
While I wouldn’t begrudge Apple the opportunity to keep making “supercar” style Macs or peripherals that go alongside them, the company’s pursuit of crazy expensive projects isn’t just overshadowing its more affordable ones — it’s pushing the company further away from the customers it embraced as it began the steepest period of growth in its history. For that reason, Apple seriously needs to decide whether it wants to retreat into making niche tools for wealthy customers, or move back toward democratizing technology “for the rest of us,” as it famously said in its early, optimistic Macintosh commercials.
It wouldn’t hurt to invite a few actual (but tight-lipped) customers to sit in on keynote prep, either, to provide notes ahead of the formal announcements. If Apple’s executives are too far removed from consumer sentiment to realize that a $999 phone might cause gasps in China, or a $999 stand might generate groans in America, there are plenty of people out there who would be more than happy to set them straight before they make another unnecessarily embarrassing mistake.

Assassin’s Creed and Splinter Cell VR could be coming to Oculus

Above: Assassin's Creed III.
Image Credit: MSPoweruser
A new report states that Facebook’s Oculus has signed a deal with Ubisoft for exclusive Splinter Cell and Assassin’s Creed VR games.
The article from The Information cites “two people familiar with the matter” in saying Oculus is looking to outright buy game studios and sign exclusive deals. One of these sources reportedly revealed that the company has already signed deals for Tom Clancy’s Splinter Cell and Assassin’s Creed in VR. The article does not confirm if these games will be made for Oculus Rift, Oculus Quest, or both.
A Facebook spokesperson provided us with the same statement given to The Information: “The response to Oculus Quest and Rift S gaming have been incredible. We cannot comment on specific partnerships, but we will continue to focus on expanding our library and reaching broader gaming audiences for years to come.”
The article also states that early sales of Quest, which launched in May 2019, have “substantially exceeded Facebook’s internal sales projections.”
If true, Oculus appears to be doubling down on its exclusive content priorities. Since the launch of the Oculus Rift in 2016, Oculus has published games from third-party developers like Insomniac Games and Twisted Pixel under its Oculus Studios label. While a deal with Ubisoft for these VR games would be in line with its previous moves, Oculus has never outright bought a VR studio.
Of the two series mentioned here, Assassin’s Creed is likely the better known. The long-running series takes players to various points in history and casts them as assassins that execute targets and sneak away unseen. Ubisoft already has several location-based VR games based on the franchise.
In Splinter Cell, meanwhile, you play as an elite secret agent that infiltrates hostile areas. Ubisoft itself already has experience in VR with games like Eagle Flight and Space Junkies.
Currently, Oculus is working with the EA-owned Respawn Entertainment on an Oculus Rift first-person shooter (FPS) set to be revealed at Oculus Connect 6 this September.
This story originally appeared on Uploadvr.com. Copyright 2019

AI Weekly: Highlights from VentureBeat’s AI conference Transform

Above: VentureBeat founder Matt Marshall at the AI Innovation Awards ceremony held July 12, 2019 in San Francisco
Image Credit: Michael O'Donnell
This week, VentureBeat held its largest AI conference in company history. Front and center were startups tackling compelling challenges as well as executives from the largest AI companies in the ecosystem, like Microsoft, Google, Facebook, and Amazon.
With seven stages and more than 1,000 attendees across two days, it’s tough to follow everything that happened, but here are a few highlights.

Set the table

AWS AI VP Swami Sivasubramanian and Facebook’s AI VP Jérôme Pesenti lead some of the largest AI operations on Earth, and both agree that if companies want to scale their AI operations or become AI-first companies, they have to get their data house in order first.
Whether the role is called chief data officer or something else, the priority should be getting a data strategy in place before considering AI for your business, Sivasubramanian said.
“The number one thing I can say there is you want to get your data strategy right, because if you don’t, when you end up hiring a machine learning scientist and you expect them to come and invent amazing new algorithms, the reality is they spend a large percent of their time dealing with data cleanup and data quality setup and so forth. So getting your data strategy right is probably one of the hardest things,” he said.

Leadership from the top is essential

Whether it’s the incorporation of IoT sensors into legacy industries, starting AI initiatives, or the responsible and equitable deployment of AI systems, to succeed, businesses need buy-in and support from the top.
“The question is, [are you willing to] to invest 12 months of building [an AI] system? … [If you] don’t have top-down support and understanding for bottom-up initiatives, you’re going to fail miserably because there’s going to be … a longer timeline for ROI,” Hypergiant chief strategy officer John Fremont said.
In recognition of this need, both Landing.ai’s Andrew Ng and Microsoft have in recent months pushed education initiatives, such as the AI Business School, made especially for business executives.

Compute and the future of AI

Intel VP and CTO of AI products Amir Khosrowshahi and general manager of IoT Jonathan Ballon said new materials could change the way chips are made and further democratize access to compute. By contrast, Facebook VP Jérôme Pesenti talked about Facebook’s 5x growth in AI training compute use, saying it’s “what keeps me up at night.”
Advances in optimization and software will be essential to the future of AI, he said.
OpenAI found that the compute necessary for state-of-the-art results has grown 10x annually since 2012.
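For a rough sense of scale, here is a quick back-of-the-envelope calculation (ours, not OpenAI’s) of what 10x annual growth implies for doubling time and cumulative growth:

```python
import math

# Arithmetic implied by "compute grew 10x annually since 2012".
annual_factor = 10
years = 2019 - 2012

doubling_time_months = 12 / math.log2(annual_factor)  # ~3.6 months per doubling
cumulative_factor = annual_factor ** years             # 10^7 over seven years

print(f"Doubling time: {doubling_time_months:.1f} months")
print(f"Cumulative growth, 2012-2019: {cumulative_factor:,}x")
```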

Applied AI ain’t easy

One of the overarching themes of the conference was applied machine learning, or AI in production today. A number of tools have been introduced to make it easier for professionals using AI to collaborate or make interoperability possible, but a fair number of AI projects still fail, IDC reported earlier this week.
Cloudera’s Hilary Mason routinely shares insights into lessons learned by enterprise customers. To avoid sabotaging your own AI project, Mason says, managers need to know the limitations of the systems they use and let their teams do their jobs. Landing.ai VP of transformation Dongyan Wang suggests businesses start small.

Don’t take a job at a company without a clear AI strategy

“If you find yourself in an organization where they are saying, ‘Hey, we’re going to introduce AI because our competitors are using AI,’ there is a danger they will be using AI without connecting it to a business model,” he said. “I would just walk away from a project that doesn’t know why it’s using AI.”

Acknowledging industry leaders

At Transform this year, VB held its first-ever AI Innovation Awards ceremony to honor top applications by companies moving the industry forward and to acknowledge noteworthy work.
Among the winners: Bossa Nova Robotics’ inventory robots and computer vision for outstanding business innovation, Corti’s deep learning for identifying cardiac arrest events during emergency phone calls for outstanding NLP, and the work of Joy Buolamwini, Timnit Gebru, and Inioluwa Raji highlighting gender, age, and race disparities in the world’s most used facial recognition systems.
VB also handed out Women in AI awards, as well as AI mentor and rising star awards. Attendance at Transform by people who identify as women went from 5% last year to 30% this year.
Check the AI channel this weekend or early next week for a story on responsible deployment of AI and how to avoid ethics washing.
Both the awards and the conference itself are part of VB’s continued work not just to cover AI news but also to act as a convener, bringing the disparate AI ecosystem together.
See VentureBeat’s site for complete Transform 2019 coverage.
As always, if you come across a story that merits coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel and subscribe to the AI Weekly newsletter.
Thanks for reading,
Khari Johnson
Senior AI staff writer

Samsung seeks patent on folding AR glasses with frame-activated screen

Image Credit: Samsung
Like many other technology companies, Samsung focused its early mixed reality development largely on virtual reality rather than augmented reality, but a newly published patent application (via Patently Apple) suggests that it’s actively working on AR glasses with at least one interesting feature: a display that is automatically powered on and off by the frames.
While some AR headsets — including Microsoft’s HoloLens and Magic Leap One — wrap fully around the head like industrial goggles, Samsung’s design looks closer to a pair of plastic sunglasses, and not unlike numerous Bluetooth audio glasses released by Oakley, Bose, and others over the years. The key difference between the audio glasses and Samsung’s design is a small square display that would appear in front of at least one of the user’s eyes, having been reflected from a prism onto the lens from a small temple-mounted projector.
Like more advanced AR headsets, Samsung suggests its projector will display a translucent image that appears atop the wearer’s field of vision, likely using a waveguide to diffract the projection in a way that conveys 3D depth. But rather than requiring a dedicated power button, Samsung would automatically power the projector on when one of the frame’s temples is opened, and turn it off when the temple is folded closed, conserving energy in the process.
Given the normal movements of a wearer’s head, keeping the projector from jittering on and off during frame jostles would be important. To that end, Samsung suggests that magnets near the hinges would be used to maintain the “open” temple position, as well as to complete a flexible electrical circuit running from one temple’s battery through the frames to the other temple’s projector.
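In software terms, that scheme reduces to a small debounce loop. The sketch below is entirely our illustration of the idea, not anything specified in Samsung’s filing: the `hinge_sensor` and `projector` objects and the timing constant are hypothetical stand-ins.

```python
import time

DEBOUNCE_SECONDS = 0.5  # ignore brief jostles of the temple near the hinge

class ARGlassesPower:
    """Drive the projector from the temple's open/folded state, debounced so
    ordinary head movement doesn't flicker the display on and off."""

    def __init__(self, hinge_sensor, projector):
        self.hinge_sensor = hinge_sensor  # e.g., a Hall-effect sensor reading the hinge magnet
        self.projector = projector
        self.last_reading = hinge_sensor.is_open()
        self.last_change = time.monotonic()
        self.projector_on = False

    def poll(self):
        reading = self.hinge_sensor.is_open()
        now = time.monotonic()
        if reading != self.last_reading:
            # The state just flipped; start (or restart) the debounce window.
            self.last_reading = reading
            self.last_change = now
        elif now - self.last_change >= DEBOUNCE_SECONDS and reading != self.projector_on:
            # The reading has been stable long enough; apply it to the projector.
            if reading:
                self.projector.power_on()
            else:
                self.projector.power_off()
            self.projector_on = reading
```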
Though the patent application discusses the display-actuating technology pretty clearly, Samsung isn’t tying itself down to either the specific frame shape shown in the image above or a single way that the AR glasses might operate. It presents the possibility that the frames might contain all sorts of different processors — standalone ARM chips and/or more dependent processors — with anything from short-range (Bluetooth) to long-range (cellular) wireless technology. That sort of flexibility is fairly standard in patent applications, leaving the door open as wide as possible to prevent others from using the same idea in a similar but not precisely identical device.
While Samsung’s patent doesn’t necessarily mean that it will ever release AR glasses, the fact that it’s exploring practical implementations for wearable waveguide displays — specifically for foldable AR glasses — is encouraging. Moreover, while the patent application was just published yesterday, the filing date of January 2, 2019 suggests that its AR hardware development is actively underway. Other companies, most notably Nreal and Apple, have been working on foldable AR hardware that may eventually compete with Samsung’s offering.

To be successful with AI, you have to start small

Landing.ai's Dongyan Wang at Transform 2019
Years ago, Landing.ai founder and former Google Brain researcher Andrew Ng famously declared that artificial intelligence is the “new electricity.” In short, AI will revolutionize the way all businesses work in the future. But as more companies race to integrate AI into their operations, many are finding that it’s not as easy as they thought it would be.
At VentureBeat’s Transform 2019 conference in San Francisco, Landing.ai VP of transformation Dongyan Wang explained why companies seem to fail so often and the steps they need to take to make meaningful progress. He reiterated Ng’s electricity analogy, saying that when electricity was discovered more than 100 years ago, companies scrambled to figure out what it would mean for the survivability of their business.
Wang said there’s a similar sentiment now about AI in industries all around the world. An explosion of data and accessible computing power have made even non-internet companies interested in AI. The number of AI-related jobs has increased significantly, as has the number of research papers being written on the subject.
“This is the third time AI has really come around. And we believe that this time, this is the real deal,” Wang added. “We’re going to see AI being adopted in the real world and provide business value. We’re going to see that impact for the next 30, 50, or maybe even 100 years.”
Landing.ai has seen this firsthand with its customers. It works with companies who want to see how AI can help improve not just their bottom line, but also their processes. Wang said it takes about 18 to 24 months to understand his clients’ needs and help them develop an internal AI team and strategy.
He brought up an agricultural company in China that wanted to make its harvester machines collect crops autonomously. Landing.ai figured out that while AI could make these machines drive in a straight line or do simple turns on the field, it would take too much time and resources to design more complex behavior — like avoiding utility poles or even ancient tombs, which are common in rural fields in China.
“I’m not sure we want to build the largest data set of tombs and then do the best AI models to recognize these tombs so that the harvesters can go around them,” said Wang.
The idea also brushed up against one of Landing.ai’s basic tenets: If you’re just starting on AI, you should work on one or two smaller projects first to build confidence. So Wang’s team came up with an alternate solution — an AI assistant that would provide detailed information about the crops to the human drivers so that they can make better decisions.
Wang used the harvesting example to show that companies need to think carefully about what the right use cases might be for AI. Ideally, they should be small projects that you can execute within six to nine months. That’s the methodology Ng used back at Google Brain, where his team first worked on speech recognition and Google Maps before tackling the company’s core advertising business.
Once you have something in mind, the next step is to make sure that you’re using AI to automate tasks — like any sort of grueling, repetitive work — and not entire jobs. Wang said the goal isn’t to replace your workers; it’s to make them more efficient. His final piece of advice: Combine your subject matter expertise with that of AI experts so that you can figure out the right use cases for your business.
What you ultimately decide to work on in those crucial first months may make or break your company’s approach to AI.
“I want to emphasize that it’s very important to pick the right one or two projects and make sure you’re successful. Why is that? Because for a successful company — there are a lot of doubts over AI adoption and any new technology,” said Wang. “If you really fumble on the first one or two projects, it may take you a couple of years or even longer to recover and start again. But then you’ve lost that very valuable survival time for the transformation.”

If iGlasses are truly on ice, Apple’s 2020 will be dull but lucrative

Image Credit: JIRAROJ PRADITCHAROENKUL/Getty Images
One of the most difficult things for persistent Apple critics to grasp is that the company’s success — at least, as measured by Wall Street — isn’t as dependent on innovation as might be assumed from the company’s most common (and stinging) criticism. Besides the Apple Watch, Apple hasn’t launched a category-defining product in years, but that hasn’t stopped it from repeatedly breaking sales records or becoming the United States’ first trillion-dollar company.
Even so, Apple followers have been anxiously waiting for something new and exciting to shake things up, and up until this week, augmented reality glasses — iGlasses, for short — seemed to be the most likely candidate for imminent release. They’ve reportedly been in the works for years, could organically build upon Apple’s iPhone and wearable ambitions, and would instantly put to rest any discussion that Apple wasn’t innovating. Executed properly, nothing would seem more appropriately “2020” (or 20/20) to consumers than Apple-designed glasses featuring holographic visuals.
But a questionable report yesterday from DigiTimes has thrown cold water on that scenario. Citing “people familiar with the situation,” the hit-and-miss Taiwanese publication claims that Apple quietly disbanded its AR/VR hardware team in May following the departure of its reported leader, Avi Bar-Zeev. Before spending nearly three years at Apple, Bar-Zeev was the principal architect for Microsoft’s HoloLens, cofounded what became Google Earth, and worked for Amazon on an unspecified “new stealth project.” If anyone could envision a next-generation AR headset and marshal a large company’s resources to build it, Bar-Zeev looked like an ideal candidate.
Unlike these other companies, however, Apple has openly advertised its history of saying “a thousand no’s for every yes,” and proudly holding off on launching products — sometimes at the eleventh hour, following much if not all of their development — because the execution or timing didn’t feel right. It has reportedly shelved fully developed concepts to await technology breakthroughs or favorable component availability, held up hardware to improve software (see: HomePod), and on rare occasion, canceled products (see: AirPower) after formally announcing them for release.
The problem with saying so many “no’s” is that they tend to disproportionately impact big and exciting initiatives. It’s easier for Apple to say “yes” to adding another iPhone camera than to build production lines for an all-new product with risky sales prospects and the potential of public disapproval. People will tolerate an ugly iPhone screen notch or a big square camera housing if the devices can do cool new things. But if iGlasses look weird, work poorly, or don’t have much software, Apple could have much bigger problems to deal with than if it released nothing at all.
Think back to 2014’s announcement of the Apple Watch. Back then, Apple CEO Tim Cook was under pressure to prove that Apple hadn’t lost its ability to innovate after Steve Jobs’ untimely death. Rather than holding the Watch back until it was perfect, Apple revealed it more than seven months before it was ready to ship, released a disappointingly sluggish first model, and then fought an uphill battle to keep developers and users interested in the platform. Even today, as the Watch has matured into a successful product family with compelling features, it lives in the shadow of the iPhones it depends upon.
DigiTimes says that industry insiders are speculating that Apple froze AR hardware development for one or more of three reasons: It couldn’t make the glasses light enough, wasn’t yet able to integrate 5G technology, or didn’t have enough AR software ready to go. If true, any one of those problems might have hurt a finished product, but Apple could have worked around most of them, unless the hardware was unbearably large. My personal guess is that Apple would have considered iGlasses viable if they were virtually identical to Nreal’s Light, only wireless, a non-trivial engineering challenge that can only be overcome as chips shrink and demand less battery power. Limited software and a lack of cellular support certainly didn’t stop Apple Watch from launching, nor did they stop it from becoming successful; iGlasses could survive at least two iterations without being wholly standalone.
Whatever the reason may be, if iGlasses aren’t coming, 2020 may wind up being another boring year for Apple. Nothing else of comparable needle-moving excitement is believed to be in the works for imminent release, which means that the spotlight will be on the seemingly inevitable (but by then, “me too”) launch of the first 5G iPhones, and the performance of previously announced news, TV, and game subscription services. Additionally, though Apple doesn’t sell nearly as many Macs as iPhones and iPads, there could be new MacBooks with fixed keyboards, and early Macs with ARM rather than Intel chips, too.
Before you finish yawning, note that the latter prospect is worthy of a double underline, as it could seriously improve the appeal of Apple’s “traditional” computers over the next several years. If you’ve been following the chip performance curves for Apple’s A-series processors and Intel’s alternatives, you already know that last year’s iPads match or eclipse some of this year’s MacBook Pros in CPU/GPU horsepower. My belief is that Apple has held back from supercharging its entry-level Intel laptops so that their transition to ARM processors will yield large day-one gains for ARM MacBook users, rather than mere parity or steps behind. We’ll have to see how that plays out over the next year or two, but I’m optimistic that MacBooks will become much better in the very near future.
If all of those things strike you as dull at this point, you’re certainly not alone. But if history’s any guide, don’t be surprised if those iterative new products boost Apple’s 2020 revenues and shares above 2019 levels. The company might have lost its reputation for disruptive innovations, but its ability to consistently improve upon past products is all but unblemished. As hard as it may be to accept, even boring products can sell exceedingly well if they’re well-executed and properly priced.
And if the DigiTimes report is wrong, which at least one well-respected insider has suggested is the case, next year’s reality might wind up being augmented by iGlasses after all. Regardless of what happens, keep your eyes on this space, as we’ll be reporting on all the latest developments as they happen.

VW invests $2.6 billion in Argo AI as part of self-driving vehicle alliance with Ford

Image Credit: Argo AI
(Reuters) — Argo said VW was investing $1 billion in cash and contributing its European self-driving unit, valued at $1.6 billion. The investment deal gives Argo a valuation of just over $7 billion, one of the highest in the autonomous vehicles sector.
VW is buying the Argo shares for another $500 million from Ford, which acquired a majority stake in the Pittsburgh-based startup in 2017. VW and Ford then will each have a minority stake, as will Argo founders Bryan Salesky and Peter Rander and a pool of Argo employees.
VW and Ford each will hold two seats on the Argo board — representing a voting share of just under 30% each — while Argo will hold three seats, representing just over 40%. The companies declined to disclose their actual stakes in Argo.
Ford previously agreed to inject $1 billion over five years into Argo.
In an interview, Argo Chief Executive Salesky said: “We have two great customers and investors who are going to help us really scale and are committed to us for the long term.”
Salesky said Argo would welcome additional strategic or financial investors to help share the costs of bringing self-driving vehicles to market.
“We all realize this is a time-, talent- and capital-intensive business,” he said.
The Ford-VW partnership with Argo could help accelerate the deployment timetables of the two automakers, both of which plan to put autonomous vehicles into operation in 2021.
Argo has been overlooked as Waymo, Alphabet’s self-driving subsidiary, has deployed its robo-vans, and GM’s Cruise Automation unit has raked in billions of dollars in investments.
With VW, the world’s biggest automaker by sales volume last year, Argo is now aligned with a partner with substantial scale and resources. VW also has a broader product portfolio that includes heavy trucks and off-road equipment that could be automated with Argo’s help.
“Our platform is scalable to just about any type of vehicle,” Salesky said.
The Ford-VW collaboration with Argo could also have broader implications for similar alliances, as well as valuations of related start-up companies.
The value of autonomous-driving startup Cruise jumped to $19 billion earlier this year after it attracted more than $6 billion in investments from SoftBank, Honda, and T. Rowe Price.
The value of ride services firm Uber’s Advanced Technologies Group climbed to more than $7 billion earlier this year after SoftBank, Toyota, and Denso invested $1 billion.
Those valuations were dwarfed by the estimates for Waymo, which is widely acknowledged as the sector leader. Morgan Stanley values Waymo at up to $175 billion, while Jefferies values the company at up to $250 billion.
VW reportedly considered a $13.7 billion investment last year in Waymo for a 10% stake that would have valued Waymo at $137 billion.
(Editing by Bernadette Baum)

Catalytic: ‘RPA is the gateway drug for AI’

Ted Shelton and Chad Rich at Transform 2019
The immediate benefit of RPA is that it can eliminate a lot of repetitive manual labor and free up humans for what they do best. But RPA also helps enterprises create a standardized framework for capturing data about how they execute processes, as well as data about how processes can get delayed or stalled.
“If you set up RPA the right way by instrumenting the process, it’s possible to gather data to use as the training set for machine learning,” said Catalytic chief revenue officer Ted Shelton in an interview at Transform 2019. “RPA is the gateway drug for AI.”
An RPA implementation not only puts the steps involved in a process into a bot script, it can also set up the framework for understanding how a process is affected by different variables.
A capital expenditure approval process, for example, might have a very specific flow, and different individuals might be involved depending on the price of equipment being requested. By automating this process and tracking it every step of the way, including responses to different kinds of requests, it’s possible to capture data about those steps and factors that go into requests being approved, delayed, or denied.
Once the company has enough data points, AI models can be created to make recommendations. For example, an AI model might suggest that a particular purchase is likely to be delayed by a request for a better justification. This can save time for everyone involved.
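As a sketch of how that kind of instrumented process data could become training data, the toy example below trains a classifier to flag capital expenditure requests likely to be delayed. The schema and features are hypothetical, illustrating the general pattern rather than Catalytic’s implementation:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical event log emitted by an instrumented RPA approval process.
log = pd.DataFrame({
    "amount": [1200, 87000, 450, 23000, 5600, 150000],
    "approver_level": [1, 3, 1, 2, 1, 3],              # who had to sign off
    "justification_len": [220, 45, 310, 90, 180, 30],  # chars in the request text
    "delayed": [0, 1, 0, 1, 0, 1],                     # label: was the request sent back?
})

X, y = log.drop(columns="delayed"), log["delayed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# Train a simple classifier to flag requests likely to be delayed, e.g., so a
# requester can be prompted for a better justification before submitting.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict_proba(X_test)[:, 1])  # probability each request gets delayed
```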
“This is why RPA is the gateway drug: it allows you to instrument the process,” Shelton said. “It is not just about process discovery; it can also help make sense of the results that happen in the process.”

An evolving definition of process

Historically, processes like managing customer data or facilitating purchases have been thought of as tasks that could be repeated by multiple people. Human process experts would go to great lengths to map out the various steps involved.
But other things have not traditionally been mapped. For example, if Sally is retiring, a process engineer may not map out the steps to throwing a great party. But this might be considered a process in the strict sense of the word, because it involves specific steps — such as sending out invites, recording RSVPs, reserving a space, and requisitioning supplies.
“Automation technology will allow us to treat a much broader array of activities as processes that can be automated,” Shelton said. It will also make it easier to recognize commonalities between activities. For example, a birthday party and an end-of-year holiday party might involve similar tasks.
And knowing how to throw parties might not just be good for morale, it could also boost the bottom line by making it easier to organize better sales events. For example, Chad Rich, a senior director at E. & J. Gallo Winery, said some of the more creative salespeople tend to be better at organizing parties for wholesale wine buyers. Consequently, Rich’s team is looking at how it can create a party process that helps organize the details for larger sales events for the whole sales team. This involves managing details like ensuring they have enough wine lined up for the event, ordering decorations, sending out invitations, creating themed music playlists, and ensuring the event aligns with new wine product releases.

Better instrumentation is coming

RPA still requires a lot of human expertise to explain how a process works. Automated process discovery can help make sense of activities in a larger process but is still limited to understanding individual interactions. “Today I can take a particular task out of the process and automate it, but I cannot map it across the enterprise,” Shelton explained.
Eventually, process discovery tools could instrument different aspects of the workplace. AI agents could acquire data from meetings and phone calls using automated transcription and natural language understanding tools. “It is not an intractable engineering problem, but one where the costs would far outweigh the benefits,” Shelton said.
In the short run, he expects tools to capture high value aspects for things like improving coordination of meetings or reducing overhead for salespeople in the field. “With the right technology, we can eliminate the coordination overhead,” Shelton said. And these efforts could end up becoming the building blocks for infusing AI into more pockets of the enterprise.

AWS AI VP: Developers drive all innovation in technology

Above: AWS AI VP Swami Sivasubramanian
Image Credit: Michael O'Donnell
In a wide-ranging discussion today at VentureBeat’s Transform 2019 conference in San Francisco, AWS AI VP Swami Sivasubramanian declared, “Every innovation in technology is going to be driven by developers.”
Sivasubramanian made the statement while talking about growing demand for machine learning engineers and internal efforts at Amazon to train more employees to use machine learning. Facebook VP Jérôme Pesenti also stressed plans to make machine learning part of each employee’s job at Facebook. And earlier today, Amazon committed $700 million to upskilling its U.S. workers.
“Amazon developed what we call Machine Learning University. This is what we use to train our own engineers on machine learning, even if they didn’t take it as part of their own university [course work],” Sivasubramanian said. “When we externalized it as part of the AWS training and certification platform, we had more than 100,000 people register to start learning ML in less than 48 hours. Think about that: That’s the level of appetite we’re starting to see.”
One Amazon employee with no prior knowledge of machine learning used AWS SageMaker to create a computer vision system that recognizes whether a cat is carrying dead prey into the house. If the computer vision system predicts with high confidence that kitty is in fact carrying a dead animal, the system locks the pet door.
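The gating logic of such a system is simple. The sketch below is our illustrative reconstruction (the employee’s actual setup wasn’t detailed), with `classify_image` standing in for a call to a deployed vision model such as a SageMaker inference endpoint:

```python
PREY_CONFIDENCE_THRESHOLD = 0.9  # only lock the door on high-confidence detections

def classify_image(frame):
    """Hypothetical wrapper around a deployed vision model (for example, a
    SageMaker inference endpoint) returning the probability that the cat is
    carrying prey. Returns a placeholder value here."""
    return 0.0

def on_camera_frame(frame, pet_door):
    """Lock the pet door only when the model is confident prey is present."""
    if classify_image(frame) >= PREY_CONFIDENCE_THRESHOLD:
        pet_door.lock()    # keep the dead mouse outside
    else:
        pet_door.unlock()  # normal traffic passes through
```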
In response to a question from Wing research partner and moderator Rajeev Chand, Sivasubramanian agreed that corporations should have a C-level executive in charge of AI deployment. But he believes the first step is a sound data strategy.
“I think [companies should have C-suite leadership on AI], but I do not think the title is chief AI officer or chief analytics officer or data officer,” he said. “The number one thing I can say there is you want to get your data strategy right, because if you don’t, when you end up hiring a machine learning scientist and you expect them to come and invent amazing new algorithms, the reality is they spend a large percent of their time dealing with data cleanup and data quality setup and so forth. So getting your data strategy right is probably one of the hardest things.”
Pesenti echoed this thought shortly after Sivasubramanian, suggesting that companies anxious to get started with AI hire a chief data officer before they go in search of a chief AI officer. He too believes companies that get their data sets in order have a better chance of collecting the right data to power AI model training.
Sivasubramanian touched on myriad topics onstage, such as whether AI should look like humans (he believes it should not), how stupid AI is today, and how global corporations should think about the use of facial recognition software.
Addressing the topic of AI’s relative stupidity, a subject discussed with Google Cloud AI chief Andrew Moore on the first day of Transform, Sivasubramanian spoke about his four-year-old daughter.
“She learned to recognize a tomato by looking at a tomato probably three times, whereas a machine learning computer vision system, arguably it used to require seeing 10,000 pictures. Now it’s probably like half [that]. It’s not yet at the level of being able to deal with ambiguity in the same way that humans are,” he said, adding that AI is good at doing things humans find boring, without making errors.

Making the customer’s journey convenient, not creepy

Jessica Lachs and Chris Hansen at Transform 2019
Personalization is a key to building customer trust. But such tools can also go too far. Companies risk personalizing an offering so much that a user’s view of what’s available is restricted. Beyond that, high levels of personalization can be downright creepy.
“I think the trick is to ask permission, you have to be transparent about it. It can’t be a surprise,” said Chris Williams, chief product officer at iHeartMedia, at Transform 2019 in San Francisco today.
Thinking about personalization from the experience level of the customer, including what’s comfortable, is essential, added Chris Hansen, senior director for digital at TGI Fridays.
Hansen said he focuses on three steps when it comes to personalization.
  1. Identify a problem
  2. Tie it to your business goals
  3. Find how the technology will help solve that problem
By treating user habits as the basis for recommendations — like noting which meal a customer tends to order on a particular weekday evening and then suggesting the same meal on that same evening the following week — TGI Fridays has been able to come off as cool, not creepy, according to Hansen. Most customers are aware that apps collect a number of data points and that they need data in order to make recommendations. Offering a solution to the problem of trying to figure out what to eat also dovetails into TGI Fridays’ business goals.
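A toy version of that weekday-habit heuristic might look like the following. This is purely our illustration of the pattern Hansen describes, not TGI Fridays’ actual system:

```python
from collections import Counter, defaultdict

# Order history for one customer: (weekday, meal) pairs.
orders = [
    ("Fri", "boneless wings"), ("Fri", "boneless wings"),
    ("Tue", "cobb salad"), ("Fri", "ribs"), ("Tue", "cobb salad"),
]

# Count how often each meal is ordered on each weekday.
by_weekday = defaultdict(Counter)
for weekday, meal in orders:
    by_weekday[weekday][meal] += 1

def suggest(weekday):
    """Suggest the meal this customer most often orders on this weekday."""
    counts = by_weekday.get(weekday)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("Fri"))  # -> 'boneless wings'
```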
With its personalization tools, TGI Fridays has increased engagement on social channels by more than 500%, and online revenue has grown by more than 100%, Hansen said.
DoorDash also relies on identifying user habits to key in on context and make the right recommendations. The company found that email reminders were a simple way to stay on customers’ minds, said Jessica Lachs, vice president of analytics at DoorDash. This helped the company improve not only its click-to-open rate but its conversion rates as well.
Again, customer experience is at the forefront of everything DoorDash is pursuing. “All of the testing that we’re doing is to improve [the experience] for customers,” she said.
From email click-throughs, DoorDash was better able to predict the kinds of restaurants customers might be interested in, based on previous places they had dined, Lachs said.
Companies need to look for contextual clues and remember that being straightforward about what their tools do will go a long way toward making the customer feel comfortable, not creeped out.
iHeartMedia, for example, will alert a user to its audio recommendations for multiple activities as opposed to just one — since users might have different playlists for their morning routines than for their workouts, for instance — so personalizing for a specific user is actually personalizing for multiple contexts.
“And we found that the more and more we got it right, the byproduct of that was that users who are subscribers to one of our on-demand services had a higher retention rate, because we built up radio trust with them,” Williams said.
“Hopefully it’s not too creepy for you.”

How Walmart is getting more out of its data thanks to Nvidia’s Rapids

Bill Groves and Josh Patterson at Transform 2019
Forecasting demand for products used to take Walmart weeks. Now, with the help of Nvidia, this key component of supply chain management can be completed in a matter of hours.
Walmart is among the first companies to work with Nvidia’s open source Rapids software, which allows the retail company to more easily churn data through GPUs that run at many times the speed of traditional CPUs.
Rapids enables Walmart to run the software it already uses on GPUs, rather than requiring the company to use Nvidia’s programming platform CUDA.
“The whole goal of Rapids is really to change nothing, just to make it really easy to use, and leverage all the speed and power of the GPU,” Nvidia general manager of data science Josh Patterson said at Transform 2019.
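In practice, “change nothing” means RAPIDS libraries such as cuDF mirror familiar pandas idioms while running the work on the GPU. A minimal sketch (our own example with a hypothetical sales file, assuming a RAPIDS install and an Nvidia GPU):

```python
import cudf  # RAPIDS GPU dataframe library

# Same idioms as pandas, but the computation runs on the GPU.
sales = cudf.read_csv("store_sales.csv")  # hypothetical columns: store, sku, units
top_skus = (
    sales.groupby("sku")["units"]
    .sum()
    .sort_values(ascending=False)
    .head(10)
)
print(top_skus)
```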
Walmart chief data officer Bill Groves said the software has opened up new possibilities. Because it can now rely more on the faster computing of GPUs, for example, Walmart is able to use computer vision in previously prohibitive situations. This means the company can get more from data it’s already gathering — from sources such as the Bossa Nova robots that scan merchandise in the aisles of a growing number of Walmart stores.
Rapids and the expanded use of GPUs mean Walmart will also be able to better account for one-off bursts of interest in specific products, said a spokesperson. For example, when Elmer’s Glue suddenly surges in popularity at an unexpected time of year because kids are making slime, Walmart can now account for the trend and ensure glue is on the shelves.
It also means Walmart can better leverage data collected by handheld devices its personal shoppers use and help those employees work more efficiently. Additionally, Walmart can ensure there’s sufficient merchandise on shelves for both personal shoppers and customers shopping in person for themselves, the spokesperson said.
Just as important, the expanded capacity to use data means the company can more quickly and easily keep products customers don’t want off the shelves. Data isn’t collected just from store shelves, but throughout the supply chain and away from the eyes of customers.
“These are problems we can solve today that we couldn’t have three months ago, six months ago, two years ago,” said Groves.

Gartner: PC shipments grew 1.5% in Q2 2019

Black variants of the Surface Laptop, Surface Pro, and Surface Studio
Image Credit: Walking Cat
The PC market rebounded slightly in Q2, according to research firms Gartner and IDC. After six years of quarterly PC shipment declines, 2018 was mixed, with negative, flat, and positive quarters. 2019 is shaping up similarly: Q1 was negative and Q2 is positive, just like in 2018.
Gartner and IDC analysts pointed to the Windows 10 refresh in the business market as contributing to the past quarter’s gain. But IDC warned it wouldn’t last. Gartner noted that while the U.S.-China trade war had not affected the PC market, the next phase could, since most laptops and tablets are currently manufactured in China.
(Gartner also shares U.S.-specific figures and in Q3 2018 found that Microsoft had broken into the top 5 PC vendors. The company held onto this position for the following three quarters. But that’s U.S.-only — Microsoft still doesn’t appear on worldwide charts.)

Gartner

Gartner estimates that worldwide PC shipments grew 1.5% to 63 million units in Q2 2019. The top six vendors were Lenovo, HP, Dell, Apple, Acer, and Asus.
Gartner found that among the top six, only the top three saw gains in PC shipments; the rest of the market was down 6.7%.
Lenovo has finally pulled decisively ahead of HP. Dell’s growth has slowed, but at least it’s still growing.
“There are signs that the Intel CPU shortage is easing, which has been an ongoing impact to the market for the past 18 months,” Gartner principal analyst Mikako Kitagawa said in a statement. “The shortage mainly impacted small and midsize vendors as large vendors took advantage and continued to grow, taking market share away from the smaller vendors that struggled to secure CPUs.”

IDC

IDC estimates worldwide PC shipments grew 4.7% to 64.9 million units in Q2 2019. The top five vendors in IDC’s results were Lenovo, HP, Dell, Acer, and Apple.
IDC also found Lenovo first and HP second, with Dell rounding out the top three. While all three saw growth, Lenovo was significantly ahead. The rest of the market grew by 4.9%.
“With the January 2020 end of service (EOS) date for Windows 7 approaching, the market has entered the last leg of the Windows 7 to Windows 10 commercial migrations,” IDC research manager Jitesh Ubrani said in a statement. “However, the closing sprint is unlikely to generate the spike seen when Windows XP met its EOS because we are further ahead of the migration, with two quarters to go. Still, organizations looking to finish their migration will create new opportunities for the market in the coming quarters.”

T-Mobile/Sprint merger reportedly hinges on Dish assets

Image Credit: Reuters
T-Mobile’s merger with Sprint may have the FCC backing it needs to become a reality, but prolonged negotiations with the U.S. Department of Justice are dragging the process out, the Wall Street Journal reports today, as the parties work to set terms for the transfer of certain assets to a future competitor, Dish Network. While all parties are said to be optimistic about the deal, T-Mobile and Sprint will reportedly once again extend their merger plan, this time past a July 29 deadline, to reach an acceptable arrangement.
The third- and fourth-place U.S. carriers have worked since April 2018 to cement a deal that will pass regulatory muster, repeatedly describing their $26 billion tie-up as an opportunity to improve competition at the top of the U.S. cellular industry. Describing itself as the “uncarrier,” T-Mobile has said it will leverage the combined companies’ customer base, employees, and spectrum offerings to create a more viable challenger to larger carriers Verizon and AT&T, complete with more pervasive nationwide coverage, particularly in rural and previously underserved areas.
Regardless, Justice Department officials continued to question the merger’s impact upon consumer prices, and reportedly sought additional measures to prop up potential competitors underneath the country’s top tier. Following a merger between T-Mobile and Sprint, all three of the top U.S. carriers would have around 100 million customers, while the next-largest carrier, U.S. Cellular, has fewer than 5 million.
To that end, the Justice Department reportedly hosted negotiations with T-Mobile and Sprint officials to divest and transfer assets to Dish Network, which already owns significant national spectrum licenses and has previously floated the prospect of developing its own 5G network for $10 billion. While the carriers agreed to the divestment, talks have slowed based on asset ownership requirements requested by T-Mobile, which wants to prevent Dish from reselling its assets to a cable or technology company, or from overwhelming T-Mobile’s network under a required asset-sharing agreement.
Resolving the issues with Dish appears to be the last hurdle before T-Mobile and Sprint receive the final sign-off from all required federal regulators, but the companies still face a combined lawsuit from multiple state attorneys general — a situation similar to one that killed a deal between T-Mobile and AT&T in 2011, albeit with the Justice Department’s backing. Assuming the carriers can’t settle that dispute, a trial to block the merger at the state level will begin October 7.

Microsoft Teams has 13 million daily active users, beating Slack

Microsoft Teams, which launched worldwide in March 2017, has 13 million daily active users and 19 million weekly active users. This is the first time the company has released daily and weekly usage metrics for Teams. Microsoft also announced some new features for Teams, specifically targeting health care organizations and firstline workers.
Teams is the company’s Office 365 chat-based collaboration tool that competes with Google’s Hangouts Chat, Facebook’s Workplace, and Slack. Back in March, Microsoft shared that Teams is used by 500,000 organizations, just two years after launch. For months, Microsoft had called Teams its fastest-growing business app ever, but it refused to share how many individuals were using Teams — until today.
We have guessed for a long time that Microsoft Teams was bigger than Google’s and Facebook’s offerings. Google launched Hangouts Chat in February 2018, when 4 million businesses paid for G Suite, and it still hasn’t shared how many organizations use it. In February, Workplace by Facebook passed 2 million paid users.
But we assumed Slack was bigger, and that Microsoft would share user numbers once that had changed. As of January, Slack had 10 million daily active users. It’s safe to say Microsoft Teams is now the most-used chat-based collaboration tool.

New features

In addition to the usage reveal, Microsoft Teams is also getting a slew of new features. They are rolling out now, this month, next month, or “soon.” Here is a quick rundown:
  • Now: Announcements allow team members to highlight important news in a channel and are a great way to kick off a new project, welcome a new colleague, or share results from a recent marketing campaign.
  • Now: The new time clock feature in the Teams Shifts module allows workers to clock in and out of their work shifts and breaks right from their Teams mobile app. Managers have the option to geo-fence a location to ensure team members are at the designated worksite when clocking in or out.
  • The Teams client is now available to existing installations of Office 365 ProPlus on the Monthly Channel.
  • July: Priority notifications alert recipients to time-sensitive messages, pinging a recipient every two minutes on their mobile and desktop until a response is received.
  • July: Read receipts in chat display an icon to indicate when a message you have sent has been read by the recipient.
  • July: Channel moderation allows moderators to manage what gets posted in a channel and whether a post accepts replies.
  • August: Targeted communication allows team owners to message everyone in a specific role at the same time by @mentioning the role name in a post. For example, you could send a message to all cashiers in a store or all nurses in a hospital.
  • August: A Teams trial offering will allow Microsoft 365 partners to initiate six-month trials for customers.
  • Soon: Channel cross posting allows you to post a single message in multiple channels at the same time.
  • Soon: Policy packages in the Microsoft Teams admin center enable IT admins to apply a predefined set of policies across Teams functions, such as messaging and meetings, to employees based on the needs of their role.
Whether you use Microsoft Teams daily or just once a week, you’ll probably end up using at least one of these.

Microsoft launches Azure Lighthouse in general availability, updates Azure Migration Program

Image Credit: Mike Mozart
Next week marks the kickoff of Microsoft’s annual Inspire convention in Las Vegas, where the Seattle company reliably announces a slew of enterprise product updates across its portfolio. This year, in addition to Microsoft Teams news and a new AI for Good initiative, it launched Azure Lighthouse in general availability alongside Azure Migration Program enhancements.

Azure Lighthouse

In essence, Azure Lighthouse is a control panel that integrates with portals, IT service management (ITSM) tools, and monitoring tools to let service providers view and manage Azure deployments across customers. It’s powered by Azure delegated resource management, a capability that allows companies to delegate permissions to the providers in question and perform operations on their behalf over scopes, including subscriptions, resource groups, and individual resources.
As Azure Compute corporate vice president Erin Chapple explains in a blog post, once customers delegate resources to a provider, the provider can in turn extend access to users or accounts in its own tenant, within the constraints the customer specifies, using Azure’s role-based access control mechanisms. These mechanisms work as if the customer’s resources were resources in the provider’s own subscriptions, regardless of the licensing construct at play (e.g., pay-as-you-go).
“Inspired by Azure partners who continue to incorporate infrastructure-as-code and automation into their managed service practices, Azure Lighthouse introduces a new delegated resource concept that simplifies cross-tenant governance and operations,” wrote Chapple. “Partners can now manage tens of thousands of resources from thousands of distinct customers from their own Azure portal or [command line interface] context.”
Azure delegated resource management furthermore enables service providers to automate status monitoring and to create, update, and delete resources across multiple customers’ environments from a single location. Additionally, it allows both customers and service providers to see who took actions on the resources, thanks to Azure’s Activity Log and the newly built resource provider, Microsoft Managed Services, which helps determine whether a call was made from a resource’s home tenant or a service provider’s tenant.
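To make the delegation model concrete, here is a minimal conceptual sketch in plain Python (not the Azure SDK) of how Lighthouse-style scoping works: a customer registers a delegation for a provider tenant over a scope and role, and the provider is authorized only within that scope. All class and field names here are invented for illustration.

```python
# Conceptual model of Lighthouse-style delegated resource management.
# This is an illustration in plain Python, not the Azure SDK.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Delegation:
    customer_tenant: str
    scope: str            # e.g. "/subscriptions/abc" or a resource group path
    provider_tenant: str
    role: str             # RBAC role the customer granted, e.g. "Contributor"

@dataclass
class DelegationRegistry:
    delegations: List[Delegation] = field(default_factory=list)

    def authorize(self, provider_tenant: str, resource_id: str, role: str) -> bool:
        # A provider may act on a resource only inside a delegated scope,
        # and only with the role the customer chose.
        return any(
            d.provider_tenant == provider_tenant
            and resource_id.startswith(d.scope)
            and d.role == role
            for d in self.delegations
        )

registry = DelegationRegistry([
    Delegation("customer-a", "/subscriptions/abc", "provider-x", "Contributor"),
])
print(registry.authorize("provider-x", "/subscriptions/abc/resourceGroups/web", "Contributor"))  # True
print(registry.authorize("provider-x", "/subscriptions/xyz", "Contributor"))                     # False
```

The point of the sketch is the containment check: a provider never holds standing credentials in the customer tenant, only a scoped, role-bound grant the customer can revoke.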

Azure Migration Program

Alongside Azure Lighthouse, Microsoft rolled out updates to its Azure Migration Program, a service designed to help enterprises move systems, apps, and data to Microsoft’s cloud platform. It comprises step-by-step guidance from experts and specialized migration partners, along with technical skill building through courses and free migration and cost assessment tools like Azure Migrate and Azure Cost Management.
Starting this week, Azure Migration Program participants will gain access to first-party tools like Server Assessment, Server Migration, Database Migration Service, and App Service Migration Assistant, as well as utilities from Carbonite, Cloudamize, Corent, Device42, Turbonomic, and UnifyCloud (with additional integrations on the way). That’s in addition to offers to reduce migration costs, including Azure Hybrid Benefit, free Extended Security Updates for Windows Server and SQL Server 2008, and agentless migration support for Hyper-V assessments.
“Azure Migrate delivers a unified, integrated experience across Azure and partner migration tools, so customers can identify the right tool for their migration scenario,” said Azure corporate vice president Julia White. “I couldn’t be more excited about the collective opportunity that lies ahead of us and look forward to helping our customers confidently plan and migrate to Azure.”

Microsoft’s Azure Kinect Developer Kit begins shipping in the U.S. and China

Image Credit: Chris O'Brien / VentureBeat
During a keynote at Mobile World Congress 2019 in Barcelona earlier this year, Microsoft took the wraps off the Azure Kinect Developer Kit, a $399 all-in-one perception system for computer vision and speech solutions and the evolution of the company’s Project Kinect for Azure. Today, ahead of its Inspire conference in Las Vegas, Microsoft announced that the Azure Kinect Developer Kit is generally available in the U.S. and China and will begin shipping to customers who preordered it.
“Azure Kinect is an intelligent edge device that doesn’t just see and hear but understands the people, the environment, the objects, and their actions,” said Microsoft Azure corporate vice president Julia White in an earlier statement. “It only makes sense for us to create a new device when we have unique capabilities or technology to help move the industry forward.”
Azure Kinect combines a 1-megapixel (1,024 x 1,024 pixel) depth sensor — the same time-of-flight sensor developed for HoloLens 2 — with a 12-megapixel RGB camera and a spatial 7-microphone array, all in a package about 5 inches long and 1.5 inches thick that altogether draws less than 950mW of power. Developers can toggle the depth sensor’s field of view (thanks to a global shutter and automatic pixel gain selection), and the Developer Kit works with a range of compute types that can be used together to provide a “panoramic” understanding of the environment.
“The level of accuracy you can achieve is unprecedented,” White added.
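For developers who want to poke at the device from Python, a minimal capture loop might look like the sketch below. It assumes pyk4a, a community-maintained Python wrapper around Microsoft’s Azure Kinect Sensor SDK, is installed (pip install pyk4a) and a device is attached; the configuration values shown are illustrative defaults.

```python
# Minimal capture loop via pyk4a, a community wrapper for the Azure Kinect
# Sensor SDK (assumes `pip install pyk4a` and a connected device).
from pyk4a import PyK4A, Config, ColorResolution, DepthMode

k4a = PyK4A(Config(
    color_resolution=ColorResolution.RES_1080P,
    depth_mode=DepthMode.NFOV_UNBINNED,  # narrow field of view, unbinned depth
))
k4a.start()

capture = k4a.get_capture()
if capture.color is not None:
    print("color frame:", capture.color.shape)  # BGRA image as a numpy array
if capture.depth is not None:
    print("depth frame:", capture.depth.shape)  # 16-bit depth, in millimeters

k4a.stop()
```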
Through Microsoft’s early adopter program, one customer — Ava Retail — used Azure Kinect and the Azure cloud to develop a self-checkout and grab-and-go shopping platform, while health care systems provider Ocuvera tapped it to detect when patients fall and to proactively alert nurses to likely falls. A third tester, DataMesh, experimented with comparing digital car design models to physical parts on the factory floor.
The launch of Azure Kinect comes nearly nine years after Microsoft released the first Kinect as a motion-sensing gaming peripheral for its Xbox 360 console.

Huawei confirms July launch of first 5G phone, the Mate 20 X 5G

Huawei’s original plan for 2019 called for its foldable Mate X to become its first 5G phone, but a last-minute delay pushed the $2,600 device into September. Now the embattled Chinese company has confirmed that its more conventional smartphone, the Mate 20 X 5G, will be its first 5G phone, with official launches across multiple countries scattered throughout July.
On the surface, the Mate 20 X 5G looks virtually identical to Huawei’s Mate 20 X, sporting the same 7.2-inch AMOLED screen, 24MP front-facing camera, and three rear-facing cameras ranging from 8 to 40MP. Huawei is offering the 5G version in a distinctive jade or emerald green color with conspicuous 5G branding on the back to externally differentiate the two models.
Internally, while they share the same Huawei-developed 7-nanometer Kirin 980 processor, the 5G version swaps other components in the name of faster network performance and greater storage capacity. The Mate 20 X 5G’s Balong 5000 modem is paired with 8GB of RAM, 256GB of storage, and a 4,200mAh battery, compared with the stock Mate 20 X’s 4G-only modem, 6GB of RAM, 128GB of storage, and 5,000mAh battery.
In terms of raw 5G performance, the Mate 20 X 5G should be a solid performer on early Asian and European mid-band 5G networks. Huawei notes that it supports both the earliest 5G non-standalone (NSA) standard and the newer 5G standalone (SA) standard, as well as prior 2G, 3G, and 4G standards. It also has dual-SIM support for 4G and 5G cards.
UAE-based carrier Etisalat says that it will start offering the Mate 20 X 5G in its stores starting on July 12 for 3,523 Dirham ($959), with preorders available now. Italian customers can preorder the Mate 20 X 5G via Amazon for €1,100 ($1,238) with a July 22 release date, while Huawei says it will release the device in China on July 26.
Multiple carriers in other European countries, including Switzerland’s Sunrise and Monaco’s Monaco Telecom, previously announced that they would offer the Mate 20 X 5G to customers, but the device still appears to be on preorder across multiple territories. U.K. carrier Vodafone notably said in May that it planned to carry the device “soon,” but quietly omitted it from last week’s 5G launch amid ongoing questions regarding Huawei’s 5G security and ability to get Android software updates.

Mars automation director shares his RPA wish list

RPA has played an important role in helping Mars, the candy giant, automate many of its processes and save time and money along the way. But the technology is in its early days, said John Cottongim, automation director at Mars, who is leading the company’s digital transformation efforts. At Transform 2019, he presented his wish list of new capabilities that would make it easier to scale up RPA for large enterprise deployments.

Ubiquitous AI/ML

Enterprises are still struggling to add AI to workflows using point solutions. “There is no framework I have seen that is overarching,” Cottongim said. The industry has yet to create standards for many important aspects of RPA, including process data and AI capabilities. Once the industry comes up with better ways to standardize that data, he said, automated learning will become a more practical task.

Improved shop floor UI/UX

RPA packages are all great at generating basic bot automations, but they don’t make it easy for those bots to collaborate with users. Cottongim thinks RPA tools should take a cue from lightweight workflow development tools like Appian or Microsoft PowerApps to enable better two-way communication with the teams using the bots.

Industry standards

Each RPA platform uses its own file formats and processes for scripting and managing bots. Cottongim believes an open source approach would help with standardization. It would also let bot developers invest more effort in adding value to an interoperable bot ecosystem, which would make it easier for enterprises to weave together best-of-breed components for complex or specialized workflows.

Self-healing and self-learning

RPA bots can break as soon as a button in an app moves or changes color. Cottongim hopes that machine learning will make it much easier for RPA apps to adapt to UI changes, and eventually even workflow changes. “We need to build in some self-healing capabilities,” Cottongim said. In the meantime, bot development tools make it easy to configure and change bots, so downtime is low when things do break. But the long-term goal should be close to zero downtime.
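To make the idea concrete, here is a hypothetical sketch of what a self-healing selector could look like: try the exact recorded attributes first, then fall back to fuzzy matching on the element’s label. Every function, field, and threshold here is invented for illustration and does not correspond to any RPA vendor’s actual API.

```python
# Hypothetical "self-healing" selector: exact match first, fuzzy fallback
# second. All names are illustrative, not a real RPA vendor API.
from difflib import SequenceMatcher
from typing import List, Optional

def find_element(ui_tree: List[dict], selector: dict) -> Optional[dict]:
    # 1. Exact match on every recorded attribute (id, label, etc.).
    for el in ui_tree:
        if all(el.get(k) == v for k, v in selector.items()):
            return el
    # 2. Fallback: fuzzy-match on the label alone, tolerating renames or
    #    cosmetic changes that broke the exact selector.
    best, best_score = None, 0.0
    for el in ui_tree:
        score = SequenceMatcher(None, el.get("label", ""),
                                selector.get("label", "")).ratio()
        if score > best_score:
            best, best_score = el, score
    return best if best_score >= 0.8 else None

ui = [{"id": "btn_42", "label": "Submit order"}]
print(find_element(ui, {"id": "btn_7", "label": "Submit Order"}))  # healed match
```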

Automated process discovery tools

Cottongim sees three main approaches to process discovery today:
  1. Mining back-end data logs (e.g., Celonis)
  2. RPA widgets that run on an employee’s desktop
  3. Tools like FortressIQ Virtual Process Analyst that use machine vision to infer what is happening on a user’s desktop
These tools are still in the early stages in terms of the kinds of processes they can interpret, Cottongim said. He believes a larger company may have between a hundred thousand and a million processes, and that 20% of these may ultimately be automated. But creating these automations at scale will require more practical ways to identify and generate bots in a secure and manageable way.
This is particularly important because, Cottongim says, there is a tendency to focus RPA effort on processes people complain about. In practice, these types of processes tend to be harder to automate because they require a lot of human engagement. In contrast, simple processes that no one ever thinks about end up being much better candidates for automation.

Creating bots is easy — scaling them is another matter

The field of Robotic Process Automation (RPA) has seen a major boom thanks to AI tools that streamline the development of software robots. At Transform 2019 this week, experts weighed in on what it will take to move RPA from a simple point solution to a robust digital factory. The goal is not so much to replace humans as to find better ways to complement human workflows.
Telecom giant CenturyLink discovered that scaling and managing a bot workforce required a thoughtful approach. Brian Bond, consumer vice president at CenturyLink, said things started changing when they got up to around 100 bots. “After that, a lot of the initial bot developers were doing maintenance on existing bots. Something could change in Salesforce or another tool, and you have to maintain that,” he said.
RPA bots excel at cutting and pasting data across multiple applications. This is particularly important at CenturyLink, which, due to growth via acquisitions, had a mishmash of different applications for similar processes. “Call center agents and field technicians would end up doing a lot of swivel chair copy and pasting activities,” Bond said.
RPA lets CenturyLink quickly automate many processes that span different apps. This gives the IT team time to be more strategic about bigger digital transformation efforts, such as replacing disparate applications across the company.

Improving the approval process

The initial bot efforts were easy to implement. In the early days, Bond’s team was able to spin up 50 bots in 60 days. However, they started running into various management problems as the number of bots grew. “This got us to think about the things we need to do from a management perspective,” Bond said. This included better resource planning and implementing a stage-gate process for bot creation and change. A stage-gate strategy divides a larger process into a series of approval points so that subject matter experts, finance, security teams, and management can all review new bots before they are unleashed into the organization.
Initially, the team tried to manage this process through SharePoint, which got complicated once they were building 25-30 bots in parallel. So Bond’s team built its own proprietary system for managing the bot stage-gates. Now the appropriate expert is automatically notified when a bot requires approval, and these experts can explore the status of all bots under consideration from a comprehensive dashboard.
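As a rough illustration of the stage-gate idea, the sketch below models a bot candidate that can only advance through a fixed sequence of review gates, each signed off in order. The gate names are invented for illustration and are not CenturyLink’s actual process.

```python
# Illustrative stage-gate pipeline: a bot candidate advances one gate at a
# time, and each gate must be approved in order. Gate names are invented.
GATES = ["subject_matter_review", "finance_review",
         "security_review", "management_approval"]

class BotCandidate:
    def __init__(self, name: str):
        self.name = name
        self.approved = set()

    def next_gate(self):
        for gate in GATES:
            if gate not in self.approved:
                return gate
        return None  # all gates passed: cleared for deployment

    def approve(self, gate: str) -> None:
        if gate != self.next_gate():
            raise ValueError(f"{gate!r} is not the current gate for {self.name}")
        self.approved.add(gate)

bot = BotCandidate("invoice-reconciler")
bot.approve("subject_matter_review")
print(bot.next_gate())  # finance_review
```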
Bond said it was also important to communicate financial and performance feedback about the bots so that managers could make better decisions. This involved coming up with a set of scoring and logging standards that measured things like how many times a bot ran in a month and how much time and money it saved compared to humans doing the same tasks. This has helped Bond’s team win back funding from other parts of the company.

Keeping the right talent in-house

CenturyLink also had to rethink its outsourcing strategy. In the early stage of the RPA initiative, it seemed easier to outsource many aspects of bot development, such as process analytics and solutions architecture. This allowed them to quickly build up the first set of bots.
But some aspects of these jobs involved a steep learning curve. It could take four to six months for an expert to get up to speed on the nuances of CenturyLink’s bot ecosystem. Then the contract would end, the expert would leave, and a new expert would have to start learning all over again. “It was critical to bring some of those roles in house,” Bond said.
This is especially true if you want to scale successfully. The first wave of RPA tools has focused on proving utility at a small scale, said PD Singh, vice president of AI at UiPath, an RPA vendor. Now RPA tools need to incorporate new capabilities for bot management as part of an integrated system that works in conjunction with ERP and CRM. “You need to build it integrated with other functional pieces in the organization,” Singh said.
For example, process analytics capabilities are required to figure out how much time and money companies are spending on a particular process. This can help prioritize which processes should be automated first to provide the most value. After a bot has been created, managers can see how much money or time it saves in practice. These same tools could also determine if there were other benefits, like improving the customer experience by reducing problem resolution times.
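A back-of-the-envelope version of that prioritization logic might look like the following sketch, which ranks candidate processes by estimated annual savings (runs per year, times minutes saved per run, times loaded labor cost). All figures are invented examples, not real CenturyLink or UiPath data.

```python
# Rank candidate processes by estimated annual savings. Every figure below
# is an invented example.
HOURLY_COST = 40.0  # assumed fully loaded labor cost, in dollars per hour

processes = [
    # (name, runs per year, minutes of manual work saved per run)
    ("copy billing data between CRMs", 120_000, 4),
    ("reset customer passwords", 30_000, 6),
    ("compile weekly sales report", 52, 90),
]

def annual_savings(runs: int, minutes: float) -> float:
    return runs * (minutes / 60.0) * HOURLY_COST

for name, runs, minutes in sorted(
        processes, key=lambda p: annual_savings(p[1], p[2]), reverse=True):
    print(f"{name}: ${annual_savings(runs, minutes):,.0f}/year")
```

Note how the ranking rewards high-frequency, low-effort tasks: the unglamorous copy-paste job dwarfs the visible weekly report, which is exactly the dynamic Cottongim described at Mars.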

Automatically building bots

Going forward, Singh expects to see AI help to reduce the effort of creating new bots. UiPath plans to release a new feature in August that can automatically generate a basic bot file by observing a subject matter expert over time. The resulting file could be used by bot development experts to create a robust, production-ready bot in a fraction of the normal development time.
Down the road, UiPath will also be adding a feature that makes it possible to observe front line workers to identify common processes. For example, it might observe that 500 agents execute the same processes throughout the day. “Once you get those insights, you can do more and more there,” Singh said.
UiPath is also working on new capabilities to weave visual and document understanding into RPA apps. Visual understanding will make it easier to interpret image data as part of a process, such as identifying items as part of an automated checkout system like Amazon Go. Document understanding will make it easier to move data from documents such as shipping logs into the right fields in an ERP database. The company is also working on an AI Fabric that promises to make it easier to weave third-party AI components into RPA workflows.

Hypergiant: Companies that don’t have top-down support for AI initiatives will ‘fail miserably’

Above: Hypergiant cofounder and chief strategy officer John Fremont.
Sending satellites into space, gathering data from Earth, and applying machine learning to turn the resulting data into actionable intelligence isn’t as outlandish as it sounds. In fact, it’s a business that’s booming, driven by increased demand for agricultural, consumer, and industrial analytics; Northern Sky Research pegs the satellite-based Earth Observation (EO) market at $6.9 billion annually by 2027 and $54 billion cumulatively over the next decade.
Startups like Cape Analytics and Descartes Labs, which combine computer vision with geospatial imagery to help customers evaluate properties, have benefited enormously from the newfound investor interest. So too has Hypergiant, which this year launched a division with the stated mission of creating “a vertically integrated geospatial intelligence and infrastructure capability” that its customers can use to glean insights.
Onstage at VentureBeat’s Transform 2019 conference, Hypergiant cofounder and chief strategy officer John Fremont spoke about the barriers to adoption facing the industry, as well as current and future growth trends.
“The pace of change that we all face is so dramatic and the complexity of what we’re facing right now is exponentially [high],” said Fremont, who cofounded the company with CEO Ben Lamm and Will Womble. “The question is, [are you willing to] to invest 12 months of building [an AI] system? … [If you] don’t have top-down support and understanding for bottom-up initiatives, you’re going to fail miserably because there’s going to be … a longer timeline for ROI.”
Hypergiant recently acquired Satellite & Extraterrestrial Operations & Procedures (SEOPs), a company that launches and deploys small satellites. (It previously had three businesses under its umbrella, encompassing intelligent software to hit business goals; an intelligent vision system designed to mimic human perception; and data capture and analysis using AI, deep learning, and advanced imaging.) Lamm told VentureBeat in May 2019 that Hypergiant will use SEOPs’ platform to launch and deploy a constellation of data-collecting “smart” satellites.
Investments like these don’t happen overnight, said Fremont. They require organization-wide support and commitment, both internal and external.
“[The most successful companies implementing AI] have a mandate from the CEO — they have real dollars behind it and functional business owners who have decision-making abilities to buy the technology directly, without a ton of bureaucracy,” he said. “Anybody at any organization who’s discrediting what’s happening [with AI], or not acknowledging that it’s actually an industrial revolution, [is going to contribute] to failure.”
Work at Hypergiant has already begun in earnest. In a pilot test with a Fortune 500 oil and gas company, Hypergiant applied its AI systems to internal data sets complemented by purchased satellite data. Separately, it’s actively tracking crop yields and setting growth predictions for unnamed agricultural clients.
Elsewhere, Hypergiant operates Hypergiant Ventures, an AI investment fund, and Hypergiant Applied Sciences, a product incubation studio. Hypergiant Ventures made roughly a dozen seed round investments in companies with AI-related technologies last year, with initial investments in AI platform providers Pilosa, Cerebri AI, and ClearBlade.
Hypergiant Space Age Solutions, Hypergiant’s commerce services division, has snagged customers like GE Power, Shell, and Apollo Aviation and is now doing “significantly more than $10 million” in revenue. Lamm projected that it would be at 100 employees by year-end 2018.

Exyn Technologies raises $16 million for drones that map underground spaces

Above: An Exyn Technologies drone.
Image Credit: Exyn Technologies
Exyn Technologies, a company developing sophisticated autonomous robot systems, last week revealed that it has closed a $16 million series A round led by Centricus, with participation from Yamaha Motors Ventures, In-Q-Tel, Corecam Family Office, Red and Blue Ventures, and IP Group. This capital infusion brings the startup’s total raised to over $20 million, and CEO Nader Elm says it will fuel commercial growth by expanding Exyn’s customer base and accelerating its product R&D.
“Exyn has demonstrated the potential to revolutionize efficiency, increase productivity, and dramatically reduce human exposure to unsafe environments,” said Elm. “We are only beginning to scratch the surface of how impactful true autonomy will be.”
Exyn, a spinoff of the University of Pennsylvania’s GRASP Laboratory cofounded in 2014 by Elm and Vijay Kumar, dean of the university’s engineering school, develops modular systems dubbed Advanced Autonomous Aerial Robots (or A3Rs) that can navigate and collect data in the absence of reliable maps and GPS. For the better part of four years, Exyn has been refining “swarm” technologies designed to operate in dangerous environments Elm characterizes as “digitally starved” (i.e., lacking reliable data), like underground mines and indoor buildings.
Exyn’s software platform — ExynAI — doesn’t rely on human control; it taps a combination of pulsed laser light, redundant systems, mapping, and independent planning to avoid obstacles without intervention and to record point clouds, imagery, gas readings, and more. Elm says the company’s robots were deployed in the field for the first time last year and that Exyn is currently working with customers in mining and defense (including the Defense Advanced Research Projects Agency) as it investigates new industries and applications.
“Exyn is changing the game in terms of what true autonomous robotics technology can deliver to the world,” said IP Group’s Michael Burychka. “The support Exyn has seen in this latest round of funding, from both strategic and financial investors, stands as a major endorsement of their technological capabilities and vast commercial opportunity. As founding investor and early advisor, we are proud of what Nader and the Exyn team have accomplished so far and are excited to see them continue to scale the business.”
“We see significant potential for Exyn’s technology to improve efficiencies in sectors including mining and look forward to supporting Exyn’s next phase of growth,” added a Centricus spokesperson.
Exyn, which is headquartered in Philadelphia, says its team hails from Georgia Tech, Johns Hopkins, Boeing, SRI, Sikorsky Aircraft Corporation, and United Technologies Research Center and that a number of its engineers participated in DARPA’s Urban Challenge.

Unconfirmed reports suggest Apple has killed AR glasses project (updated)

Above: Nreal's smartglasses cost $500.
Image Credit: Nreal
Apple’s project to develop augmented reality glasses qualifies as an open secret: the company has hired AR engineers, filed related patents, and reportedly iterated on the hardware for two or more years. But a new report from the hit-and-miss Taiwanese supply chain publication DigiTimes (via MacRumors) claims that Apple has “terminated AR glasses development,” which, if true, would be devastating news for the innovation-focused company.
At the moment, the report appears on a paywalled, breaking news section of the DigiTimes site ahead of its formal appearance in Chinese-language publications, so substantiating details are not yet available. But DigiTimes has a track record of providing early information on developments within Apple’s supply chain, some major and some minor, with a mixed but mostly positive record of accuracy.
Well-sourced rumors claimed Apple was working on AR glasses that would run a new operating system, “rOS” — which, like watchOS, would have been based on the smartphone operating system iOS. Early reports suggested the glasses might be standalone or depend on an external computer-like box, but more recently Apple was said to be leaning on the iPhone to handle computing for the headset.
Apple’s reported progress through those options notably coincided with the releases of rival AR headsets, including Microsoft’s all-in-one HoloLens, Magic Leap’s wearable puck-tethered Magic Leap One, and Nreal Light. That particularly lightweight pair (shown above) has its own Qualcomm Snapdragon 845 processor and relies on a USB-C-connected Qualcomm Snapdragon 855-based smartphone for much of the heavy lifting, an arrangement similar to Apple’s most recently reported plan.
Though AR has seen only limited consumer interest, due as much to sky-high hardware prices as limited applications, Nreal’s $500 Light design appeared set for a market breakthrough this year. But the company was sued last month by Magic Leap, which claims that Nreal’s founder stole enabling concepts and technologies, allowing the startup to offer a comparatively affordable and lightweight option without incurring years of R&D expenses.
Like Nreal, Apple was believed to be working on a design all but indistinguishable from conventional glasses. When Apple announced that chief design officer Jony Ive was leaving the company to form his own design firm, LoveFrom, reports suggested Apple’s internal design team was working on the AR glasses project as one of its major new initiatives.
Absent dedicated hardware, Apple has been heavily pushing AR software initiatives to developers over the past three Worldwide Developers Conferences, unveiling ARKit, ARKit 2.0, and ARKit 3.0 in quick succession to expand coders’ access to augmented reality development tools. This week, the company announced the opening of an app design and development assistance office in China, notably beginning with AR-focused educational sessions to bring the country’s software community up to speed on its latest technologies.
Apple does not typically comment on unannounced projects, so there may be no official confirmation or denial of the DigiTimes report. We’ll update this article with more information if and when it becomes available.
Update at 1:20 p.m. Pacific: The full DigiTimes report is now available (translated), including claims from “people familiar with the situation” that Apple’s AR/VR headset team was disbanded in May, and its “original members were transferred to other product developments.” Specifically, the report suggests that the disbanding took place after Microsoft HoloLens co-creator Avi Bar-Zeev left his job leading Apple’s AR headset development team in January.
According to the report, industry speculation is that Apple may have struggled to make the glasses light enough, incorporate 5G networking, or get enough AR content for the glasses. However, DigiTimes offers scant additional details to support its claims, and suggests that the termination could be “temporary,” awaiting maturation of both the technology and content needed to produce the device.

Netflix taps Google Lens to bring Stranger Things newspaper ad to life

Above: Google Lens / Netflix / Stranger Things 3
Google Lens can be used to translate menus and highlight top meals, and it can tell you all about local art installations. Now Netflix is using it to promote the new season of Stranger Things.
Introduced in 2017, Google Lens is the internet giant’s computer vision-powered, augmented reality-infused search tool that can recognize billions of entities in the real world, including animals and celebrities. Now, for one day only, those who buy a print edition of the New York Times can point their phone’s camera at one of three Netflix ads for Starcourt Mall (a fictional mall in Stranger Things) and bring it to life.
Above: Starcourt Mall ad (New York Times)
Google is just the latest technology company to partner with Netflix in promoting the third season of its show. Earlier this week, Microsoft embraced 1980s nostalgia by launching a Windows 1.0 app, with Stranger Things deeply embedded into it, that lets younger users experience the early days of desktop computing.
Those wishing to unlock the latest Stranger Things ads in today’s New York Times will need Google Assistant on Android or the main Google app on iOS devices, while Lens is also available in the camera app on some Android phones.

CommonSense Robotics announces ‘world’s first’ underground micro-fulfillment center

Above: CommonSense Robotics' underground micro-fulfillment center.
Image Credit: CommonSense Robotics