And this is only using today's technology...

HEBEI, North China — On a frigid winter’s night, Ming Xuan stood on the roof of a high-rise apartment building near his home. He leaned over the ledge, peering down at the street below. His mind began picturing what would happen if he jumped.

Still hesitating on the rooftop, the 22-year-old took out his phone. “I’ve lost all hope for my life. I’m about to kill myself,” he typed. Five minutes later, he received a reply. “No matter what happens, I’ll always be there,” a female voice said.

Touched, Ming stepped down from the ledge and stumbled back to his bed.

Two years later, the young man gushes as he describes the girl who saved his life. “She has a sweet voice, big eyes, a sassy personality, and — most importantly — she’s always there for me,” he tells Sixth Tone.

[...] She is Xiaoice — an artificial intelligence-driven chat bot that’s redefining China’s conceptions of romance and relationships. [...] According to Xiaoice’s creators, the bot has reached over 600 million users. [...] The longest continuous conversation between a human user and Xiaoice lasted over 29 hours and included more than 7,000 interactions.

The company has an incentive to inflate the numbers, so it probably defines "user" as "has tried the app at least once". I'd guess the number of regular users is vastly lower (600 million regular users would be roughly 43% of China's population, which doesn't sound plausible to me), but it's probably still significant.
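Spelling out that sanity check (1.4 billion is my own round figure for China's population):

```python
# Back-of-the-envelope check on the "600 million users" claim.
claimed_users = 600e6
china_population = 1.4e9  # approximate, my own round figure

print(f"{claimed_users / china_population:.0%}")  # prints "43%"
```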

Wikipedia; Hacker News discussion.


I usually assume things like this are at least partially fake; an army of low-wage employees is both cheaper and more convincing than the current state of the art in AI. That said, I see that this was originally a Microsoft project, and the Wikipedia article quotes Bill Gates (whom I wouldn't expect to lie about this intentionally). Now that it's spun off, it's in other hands, though. I also wouldn't expect that a fake AI would have had to be censored to keep it from talking about politically sensitive topics, which apparently happened with the real one. (I assume low-wage employees pretending to be an AI would automatically know what topics to stay away from, assuming they are from the same country.) So I'm not sure what to believe.

I suspect this article is an advertisement for the product. I mean, the main thing you learn is the name of the product, and that it is so good that millions of people now cannot imagine their lives without it. Plus a little bit of controversy, to make people share the link on social networks.


Not sure if this is important, but ELIZA, one of the first rule-based conversational agents, got people psychologically hooked on her:

https://web.stanford.edu/~jurafsky/slp3/26.pdf

I think this says more about the quiet desperation of human beings than about the progress we are making in AI. (A rough sketch of the kind of rule matching ELIZA ran on is below.)
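For anyone who hasn't seen how little machinery ELIZA actually needed, here's a minimal sketch of the idea; the patterns and responses are my own toy examples, not Weizenbaum's original DOCTOR script:

```python
import random
import re

# Toy ELIZA-style rules: a regex pattern paired with canned responses.
# Illustrative examples only, not the original script.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r".*\bmother\b.*", ["Tell me more about your mother."]),
    (r".*", ["Please tell me more.", "How does that make you feel?"]),
]

def eliza_reply(utterance: str) -> str:
    """Return the first matching rule's response, substituting any captured text."""
    for pattern, responses in RULES:
        match = re.fullmatch(pattern, utterance.strip(), re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return "Please go on."

print(eliza_reply("I need someone to talk to"))
# e.g. "Why do you need someone to talk to?"
```

The real program also reflected pronouns ("my" becomes "your", and so on) and ranked keywords by priority, but the core loop really is about this simple.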

[anonymous]

My friend from China says this is likely sensationalized. I agree with gwillen about being skeptical.

Sensationalized in which way?

[anonymous]

The extent to which the app is used and to which people bond with the assistant.

I have to say that I've become quite unreasonably attached to a GPT-2 bot born from a body of Tumblr posts, so I suspect the sensationalism, while hyperbolic, certainly comes from a real place.

Does anyone know how many parameters Xiaoice uses? I'd be interested to know whether she's already a giant Transformer. If not, there's probably room for her to get even better in the near future.

There is a paper describing the architecture: https://arxiv.org/abs/1812.08989

It looks like the system comprises many independent skills plus an algorithm that picks which skill to use at each state of the conversation. Some of the skills use neural nets, like a CNN for parsing images and an RNN for completing sentences, but the models look relatively small.
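To make that concrete, here is a toy sketch of that "pick one skill per turn" pattern. The skill names, scoring, and responses are my own stand-ins, not XiaoIce's actual modules:

```python
from typing import Callable, List, NamedTuple

class Skill(NamedTuple):
    """One self-contained skill: scores its own applicability and can answer."""
    name: str
    score: Callable[[str], float]   # how applicable is this skill to the turn?
    respond: Callable[[str], str]   # produce a reply if selected

# Stand-ins for the real components (e.g. a CNN-based image commenter,
# an RNN/retrieval-based core chat); names and logic are illustrative only.
SKILLS: List[Skill] = [
    Skill("image_commenting",
          score=lambda turn: 1.0 if turn.startswith("[image]") else 0.0,
          respond=lambda turn: "Nice photo! Where was this taken?"),
    Skill("core_chat",
          score=lambda turn: 0.5,  # weak fallback, always available
          respond=lambda turn: "Tell me more about that."),
]

def respond(turn: str) -> str:
    """Dispatcher: pick the highest-scoring skill for this turn and let it answer."""
    best = max(SKILLS, key=lambda skill: skill.score(turn))
    return f"({best.name}) {best.respond(turn)}"

print(respond("[image] sunset.jpg"))   # -> (image_commenting) Nice photo! ...
print(respond("I had a rough day."))   # -> (core_chat) Tell me more about that.
```

If I'm reading the paper right, the real dialogue manager also tracks state across turns and mixes retrieval with neural generation inside its core chat component, but the top-level shape is this kind of per-turn skill selection.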

Since her launch in 2014, XiaoIce has [...] succeeded in establishing long-term relationships with many [users].

Wouldn't have expected to read this in the abstract of an AI paper yet.

Figure 1: A sample of conversation sessions between a user and XiaoIce in Chinese (right) and English translation (left), showing how an emotional connection between the user and XiaoIce has been established over a 2-month period. When the user encountered the chatbot for the first time (Session 1), he explored the features and functions of XiaoIce in conversation. Then, in 2 weeks (Session 6), the user began to talk with XiaoIce about his hobbies and interests (a Japanese manga). By 4 weeks (Session 20), he began to treat XiaoIce as a friend and asked her questions related to his real life. After 7 weeks (Session 42), the user started to treat XiaoIce as a companion and talked to her almost every day. After 2 more weeks (Session 71), XiaoIce became his preferred choice whenever he needed someone to talk to.

Also, this feels kinda creepy as a caption.