Try Things

A couple of weeks ago, I asked around Facebook and Discord for people willing to test out Polis, an interactive survey tool.

Polis appealed to my aesthetic. It hosts a conversation where participants submit tweet-sized comments on a topic, which other participants vote on by clicking "agree", "disagree", or "pass". It uses these votes to cluster participants into like-minded groups and to identify "consensus" points on which all the clusters agree.
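Polis's real pipeline is more sophisticated, but the core idea (reduce the vote matrix to a low-dimensional space, split participants by where they land, then look for comments that every cluster endorses) can be sketched in a few lines. This is a toy illustration, not Polis's actual algorithm; the vote matrix, the two-way split, and the 0.5 consensus threshold here are all made up for the example.

```python
import numpy as np

# Toy vote matrix: rows = participants, columns = comments.
# +1 = agree, -1 = disagree, 0 = pass/unseen.
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  1],
    [ 1, -1, -1,  1],
])

# Crude dimensionality reduction: SVD of the centered matrix.
# The sign of each participant's score on the first principal
# component gives a rough two-way split into opinion groups.
centered = votes - votes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
labels = (U[:, 0] > 0).astype(int)

# A "consensus" comment is one that both clusters agree with
# on average (threshold of 0.5 chosen arbitrarily here).
consensus = [
    j for j in range(votes.shape[1])
    if all(votes[labels == k, j].mean() > 0.5 for k in (0, 1))
]
print(labels, consensus)
```

In this toy matrix, the last comment is the only one both groups agree with, so it is the lone "consensus" point, however the clusters fall out.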

I saw a lot of promise in their model. Social media tends to addict and polarize, and I am constantly seeking healthier alternative technologies. I hoped Polis could enable positive communication and group problem-solving.

So I made a "conversation" in Polis, just to try it out.

The resulting survey report is here: https://pol.is/report/r7dr5tzke7pbpbajynkv8. I'm proud of myself for taking a step forward on my ideas. I tested my concept, and now I have concrete data to iterate on. I learned its potential much more quickly than if I'd gone on theorizing about it.
 

Afterthoughts

Polis is, first and foremost, a tool for doing good data science. It pairs a simple user interface, cleverly optimized to collect a sparse matrix of evolving survey data, with a few automatically generated visualizations of that data. This became clearer as I engaged with the platform and its quirks.


I will not go deeply into the results of the survey I ran. The highlights:

  • Disagreed (>60%)
    • Everybody else has the exact same name as me.
      • (negative calibration comment)
    • we're doing fine on coordination already
  • Agreed (>60%)
    • Spending too much time on discord take[s] away from being productive
    • In-community dating is normal and acceptable
  • Passed (>30%)
    • Rats are slans
    • Lesswrong codebase is very impressive
    • Prompt I want to hear from people on: What specific social media incentives are pulling apart the community, and in what ways
    • Lesswrong codebase is overly complicated
      • (Depending on how many readers know how to code, I may want to write a basic tutorial for using the Lesswrong API. I'm big on empowering end users to tinker.)
    • The meetup zooms are a net positive

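On the LessWrong API note above: LessWrong runs on the ForumMagnum codebase, which exposes a GraphQL endpoint. The endpoint URL and query shape below are my best guesses from poking at the public site, not documented API, so treat this as a starting point for tinkering rather than a reference. The sketch only builds the request payload; the actual network call is left commented out.

```python
import json

# Hypothetical sketch of querying the LessWrong GraphQL API.
# Endpoint and query shape are assumptions, not verified documentation.
ENDPOINT = "https://www.lesswrong.com/graphql"

query = """
{
  posts(input: {terms: {view: "top", limit: 3}}) {
    results { title pageUrl }
  }
}
"""

# GraphQL requests are POSTed as JSON with a "query" field.
payload = json.dumps({"query": query})

# To actually send it (requires the `requests` package and network access):
# import requests
# resp = requests.post(ENDPOINT, data=payload,
#                      headers={"Content-Type": "application/json"})
# print(resp.json())

print(payload[:60])
```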
I spent some time analyzing both the automated report and the raw data, and mostly realized that the survey needed better question design and more stable responses to support useful inferences. Earlier participants never returned to vote on comments that others submitted later, and I don't know that the people I managed to reach are particularly representative. At the scale Polis usually operates, with hundreds of participants and thousands of comments, clever sampling and interpolation can overcome sparsity in the answers. I did not attract that kind of audience.

Facebook integration didn't work. A good chunk of the LessWrong diaspora went to Facebook, so this frustrated me quite a bit. Since I did not conduct a carefully sampled study, I would have valued even more the individualized element of seeing which opinion groups exist among my friends.

Learning More

I recently had the opportunity to video call with a person from the Polis team. We had a very fun and satisfying conversation. But on that call, I learned that the features of Polis which drew my attention were almost unintentional: slapdash artifacts of the very first prototype the team could throw together. The Polis team intends to focus on technical data analytics and large project deployments in the future. A resource to keep in mind for the next LW/SSC survey, perhaps.

Takeaways

I expected my first try at Polis to suck. It did. That much does not surprise me.

Ultimately, having a platform like Polis does not seem as pressing and necessary as it felt to me at the time I tried it. LessWrong provides enough functionality:

  • poll comments
  • question posts
  • predictions

in this vein that I don't see it proving worthwhile to make more Polis conversations. There is more to try.

Comments

Technology that "factors stance space" as pol.is tries to do and finds consensus excites me!

I'm very sympathetic to the idea that the ability of modern western countries to cohere and find consensus is a bottleneck on progress. Finding Pareto-optimal sources of agreement may be a good way to help with this.

I am curious about the difference between "having a good, deep conversation" and "coming to a conclusion". I have a feeling that these two goals are at odds. Polis sounds like it's targeted at the second goal - but for the owner of the conversation to come to a conclusion, not the participants collectively.

I am seeing the need for better question design in all the collaborative discussion tools I'm looking at, but that kinda (ahem) begs the question. What is the process that gets to well-designed questions?

For designing questions to reach conclusions, let's not reinvent the wheel. I'm sure plenty of resources already exist on good survey design.

Meanwhile, I'm not even sure having good questions matters overmuch for conversations. Questions are a starting point, and the meat of collaboration is in synchronously moving from the starting point. The best questions are gonna be ice-breakers, improv warmups, and "in-my-culture" questions. (In-my-culture: a category of questions that bring core behavior/values differences into sharp focus and make them translatable through a common idiom.)