Realigning with AI
I was going through Kaustav’s website when I saw a post where he was realigning himself with AI, and that’s when I realized I should realign too. We had talked yesterday, and that conversation led to the textbooks from this MIT course being added to my reading list.
I had so many thoughts about AI while going through the posts he linked to on Wait But Why.
It is not far in the future when your calendar application will automatically call a self-driving car to pick you up as you step out of your office and take you to your next destination. And car sharing would be handled automatically by these all-knowing systems, such that you’d probably be sharing these cars with three others. (This is almost a reality. Uber/Ola already have car sharing; all that’s missing is automatic driving and automatic booking.)
I also thought about the multi-platform bot I wanted to build.1 One feature essential to this bot is the ability to connect accounts of the same person on different platforms to a single entity. While it would be trivial to do this with an associated email address, let’s think about how it would happen if we had an AI. The bot could analyze a lot of data (location, links, photos) to figure out connections between accounts. But let’s say, for the sake of argument, that this data is not available. How would it then figure out which two accounts belong to the same person?
Specifically, can people be recognized from their ideas? (Now this is moving away from AI.) If my posts on one platform are very similar to my thoughts on another platform (if I use the same words, the same thought patterns, etc.), would it be possible to identify that it’s me on both platforms? (Of course, in today’s social networks this wouldn’t be easy, for our Instagram personas are not our Facebook personas, and so on.) Let’s assume it is possible. A bot might then output a probability score for how sure it is that account A and account B belong to the same person. For my accounts A and B, the score would be pretty close to 1 (100%).
Now imagine it analyzes many accounts and finds people whose ideas are very similar to mine. It would not be easy for the bot to distinguish between those accounts. It might give a score of 0.9 for my account A and account C, which belongs to someone who thinks just like me.
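Just to make that score concrete, here’s a minimal sketch of such a scorer. Everything in it is my own assumption: real stylometry would look at function words, character n-grams, or embeddings rather than raw word counts, and the account data below is made up. It simply computes the cosine similarity of word-count vectors, which, like the score above, lands between 0 and 1.

```python
from collections import Counter
import math

def word_counts(posts):
    """Bag-of-words counts over all of an account's posts."""
    counts = Counter()
    for post in posts:
        counts.update(post.lower().split())
    return counts

def similarity(posts_a, posts_b):
    """Cosine similarity of word-count vectors; 1.0 means identical usage."""
    a, b = word_counts(posts_a), word_counts(posts_b)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical data: A and B are "me" on two platforms; C is a
# like-minded stranger whose score against A may also come out high.
account_a = ["what is intelligence, really", "notes on building an AGI"]
account_b = ["really, what is intelligence", "building an AGI: some notes"]
account_c = ["thoughts on intelligence and AGI"]
print(similarity(account_a, account_b))  # close to 1
print(similarity(account_a, account_c))  # high, but lower
```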
That leads to two interesting thoughts.
- That would make a brilliant suggestion algorithm for social networks to recommend friends/followers to users (a sketch follows this list).
- And this is what I find fascinating. We usually think of people as separate entities, but people are not entirely distinct from each other. The thoughts I have and the thoughts of someone I hang out with regularly could be very similar. In fact, I might influence them and they might influence me to the point where we could be considered the same, at least with respect to that thought. It might therefore be possible to predict my thoughts on a particular situation from the other person’s thoughts.
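Here is the sketch promised in the first bullet: a toy friend-suggestion ranking that reuses similarity() from above. The candidates mapping and the function name are hypothetical, not any real network’s API.

```python
def suggest_follows(my_posts, candidates, k=3):
    """Rank candidate accounts by how similar their posts are to mine.

    `candidates` maps an account name to that account's list of posts;
    `similarity()` is the toy scorer sketched earlier.
    """
    scored = [(name, similarity(my_posts, posts))
              for name, posts in candidates.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Hypothetical usage, with the accounts from the previous sketch:
print(suggest_follows(account_a, {"B": account_b, "C": account_c}))
```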
And that latter thought can extend into psychology: it is not difficult to find a group of people sharing the same thought.
It was yesterday that I realized that, despite my interest in AI, I hadn’t done any groundwork, not even reading the basics. So I decided to stand on the shoulders of giants instead of thinking all alone.
Coming back to the WBW posts: the first, titled Road to Superintelligence, helped me put a name to the kind of AI I’m interested in: Artificial General Intelligence (AGI). It also talked about Artificial Superintelligence (ASI). ASI seems very tricky, because even thinking about its possibilities is like thinking about superpowers.
The post then describes three ways to achieve AGI.
First, we can just try to replicate the brain in its entirety, and this has already been done with C. elegans.2 But I have a hunch that if we go down this route, we will merely have created an electronic brain while still not having figured out what intelligence is. (And it would be a recipe for disaster too!)
The second and third options seem far less promising. Genetic algorithms are more or less random, and letting computers build AI is like running away from the problem.
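To unpack “more or less random”: a genetic algorithm is random mutation steered only by a selection step. A toy loop, sketched below with an arbitrary fitness function that has nothing to do with intelligence:

```python
import random

def fitness(genome):
    """Arbitrary toy objective: highest when every gene equals 0.5."""
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=50, genome_len=8, generations=200):
    # Start from pure randomness.
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]        # selection: keep the fittest half
        children = [[gene + random.gauss(0, 0.05)  # mutation: random nudges
                     for gene in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # creeps toward 0 as genomes converge on 0.5
```

Selection prunes but never designs; all the exploring is done by random mutation, which is the sense in which the method is random.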
In fact, WBW seems pretty optimistic about how an AGI could recursively improve itself and become an ASI. I have another hunch that unless we figure out what intelligence is, it will not be possible to build a system that can improve its own intelligence. Humans, for example, can’t improve human intelligence.
And in part 2 of the post, Our Immortality or Extinction, they continue the trend, betting on Ray Kurzweil’s prediction that we’ll soon have ASI. But I still don’t understand how there can be an AGI that improves itself without us first figuring out what intelligence is. (Just as with the gray goo story: until we figure out how to create a nanobot that can replicate itself, we don’t have to worry about it turning everything into gray goo.)
Anyhow, it’s an intellectual pleasure to think about how cool things would be (or not) once we have ASI. But getting AGI is the difficult part (getting AGI with an understanding of the ‘I’ in it is even more difficult).
And I am curious (extremely) about understanding intelligence.
1. It is a bot that has accounts on multiple social networks. It follows people, checks feeds, listens to me, listens to others, and talks to me or others intelligently. ↩
2. This one ends with a nice paragraph: “Technologically advanced civilizations will eventually make simulations that are indistinguishable from reality. If that can happen, odds are it has. And if it has, there are probably billions of simulations making their own simulations. Work out that math, and the odds are nearly infinity to one that we are all living in a computer simulation.” ↩
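For what it’s worth, “that math” works out roughly like this, assuming N indistinguishable simulated realities for every base reality (N is my placeholder, not a figure from the quote):

```latex
P(\text{base reality}) = \frac{1}{N + 1}
\qquad\Longrightarrow\qquad
\text{odds of simulation} = N : 1 \longrightarrow \infty : 1 \ \text{as } N \to \infty
```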