“February 17 is ASI.” That’s the current X sentiment (on my TL). I have been reading posts from different accounts claiming that Grok-3 will be the ultimate test of scaling laws. While that may be true, we don’t know (yet) whether it will be a GPT-3.5-type model or a reasoning model, in which case the status of scaling laws will remain TBD.

The other main idea to pull at is ASI itself. What does it mean for a machine to be ASI? Does it mean that it needs to follow the rules set by humans, or should it wander off, thinking in tangents? Should it go deep into a topic, or should it be able to intuitively give smart answers? I think the answer here is more nuanced. My definition of ASI would be a machine that can be sufficiently intelligent at any task at the level of an average human. This stems from my deep belief that humans are actually super-intelligent.

Will life be very different after we get super-intelligent machines? We will be better equipped to deal with problems such as climate change, poverty, disease, and natural disasters, but this does not mean that humans will no longer be at the forefront of these improvements. The idea that “AI will replace jobs” echoes the same sentiment as “computers will replace humans.” Everyone knows that the internet age ended up employing more people than before, albeit in a different space. I suspect the same will happen with AI.

I saw a recent All-In Podcast video where Naval said that if you go into any job and say “I understand AI,” you will be hired. I don’t think he is wrong. Gen-Z workers who can best take advantage of the latest AI tools will have the most leverage in the coming years (maybe they already do?).

I agree with David Deutsch’s world model of KNOWLEDGE CREATION, which empowers me to create a future that is beneficial and good for all living beings.