This article by Shivon Zilis on O'Reilly's website may be of interest. There's essentially no technical content (I think the author is a finance person with little technical expertise); it's a fairly fluffy high-level survey of the ecosystem, asking questions more like "what companies are doing this stuff and what are their products?" than like "what scientific advances have there been and how do they work inside?".

Here are a few quotations to give the flavour of the thing:

> The two biggest changes I’ve noted since I did this analysis last year are (1) the emergence of autonomous systems in both the physical and virtual world and (2) startups shifting away from building broad technology platforms to focusing on solving specific business problems.

> In last year’s roundup, the focus was almost exclusively on machine intelligence in the virtual world. This time we’re seeing it in the physical world, in the many flavors of autonomous systems: self-driving cars, autopilot drones, robots that can perform dynamic tasks without every action being hard coded. It’s still very early days—most of these systems are just barely useful, though we expect that to change quickly.

> Similarly, researchers are doing things that make us stop and say, “Wait, really?” They are tackling important problems we may not have imagined were possible, like creating fairy godmother drones to help the elderly, computer vision that detects the subtle signs of PTSD, autonomous surgical robots that remove cancerous lesions, and fixing airplane WiFi (just kidding, not even machine intelligence can do that).

(Warning: one link in the last paragraph is to an article in the Daily Mail, which is a terrible terrible newspaper.)


I think this bit was my favorite part of the article, because it points towards the sort of munchkinry that machine intelligence makes easier:

> I can’t wait to see more combinations of the practical and eccentric. A few years ago, a company like Orbital Insight would have seemed farfetched—wait, you’re going to use satellites and computer vision algorithms to tell me what the construction growth rate is in China!?—and now it feels familiar.

I first heard of this sort of project... three years ago, maybe? One sample use case then was counting cars in Wal-Mart parking lots to estimate sales figures, in order to trade ahead of Wal-Mart earnings calls. It calls to mind the early commercial buyers of telescopes--merchants who would use them to see which ships were coming into harbor (and possibly price info those ships displayed with flags), and then trade on the early knowledge.

> Some [...] virtual agents are entirely automated, others are a “human-in-the-loop” system, where algorithms take “machine-like” subtasks and a human adds creativity or execution. (In some, the human is training the bot while she or he works) The user interacts with the system by either typing in natural language or speaking, and the agent responds in kind.

I'm most interested in the highlighted part. I think the interaction is where the success of AI will be decided: whether it feels natural.

"The human trains the bot" ... I am reminded of the aperçu that the mouse trains the cat how to catch mice.