How language models are filling an emotional void.
Exploring the thin line between tool and companion.
I can’t forget the faces of the participants in one of my workshops when I admitted, straightforwardly, that I had battled depression and anxiety for many years.
It was a mix of surprise, empathy, and understanding. I immediately saw the relief on some of those faces.
This one tough confession created an instant, weird emotional connection between us.
Then I took a breath and slowly moved on. I wanted to show people how they can incorporate LLMs, or AI in general, into their mental well-being routine.
This idea had been buzzing in my head for a while, and I’ve tested it thoroughly using various techniques.
One that I found particularly useful is what I like to call The Stage.
Imagine a stage that squeaks a bit and smells like a mix of old wood and dust - like the one in a theatre. You’re in the center, with a light hitting your face, forcing you to squint.
When your eyes get used to the light, you look around and see chairs with different people sitting in them. All of them are looking at you.
You can’t remember their names, but you know that one of them is a business coach, the second is a neuropsychologist, the third is a diabetologist… and the others have worked with various tech startups.
All of a sudden the person on your left asks you a question. You answer.
Then you get a question from the right. This time it’s a contrarian one, and you don’t have the answer right away.
But you finally make it through the round of questions. Silence.
After a long 30 seconds, the experts, one by one, start giving you their observations: thinking patterns, behaviors, deconstructed concepts, and angles that you hadn’t discovered yet.
Unlike the reflection in the mirror each morning, LLMs can see me more objectively. They spot things I didn’t notice - all based on one prompt and a couple of minutes of navigating through the questions.
Is it a replacement for professional help? Absolutely not. But as a "bandaid for smaller problems," it's incredibly effective. It's helped me overcome mental blocks and recognize when my thinking isn't grounded in reality.
Would you like to give 'The Stage' a try?
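If you do, here is a minimal sketch of how The Stage could look as a small Python script built on the OpenAI SDK. The panel roster, the "curtain" keyword, the model name, and the prompt wording are my own illustrative choices, not the exact prompt from the workshop - treat it as a starting point and recast the experts for your own situation.

```python
# "The Stage" as a minimal sketch using the OpenAI Python SDK.
# The panel roster, "curtain" keyword, and prompt wording are illustrative
# assumptions, not a fixed recipe - swap in the experts you actually need.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

STAGE_PROMPT = """You are a panel of experts seated around a stage:
a business coach, a neuropsychologist, a diabetologist, and two people
who have worked with various tech startups. I am standing in the center.

Phase 1: Take turns asking me short, pointed questions about the topic
I bring up - one question at a time, waiting for my answer.
Phase 2: When I say "curtain", stop asking and have each expert share
their observations: thinking patterns, behaviors, deconstructed concepts,
and angles I haven't considered. Be direct, not flattering."""

def the_stage():
    # Keep the whole exchange in one message history so the "panel"
    # remembers earlier answers when it gives its final observations.
    messages = [{"role": "system", "content": STAGE_PROMPT}]
    print("Step onto the stage. Type 'curtain' for the observations, 'exit' to leave.")
    while True:
        user_input = input("\nYou: ").strip()
        if user_input.lower() == "exit":
            break
        messages.append({"role": "user", "content": user_input})
        response = client.chat.completions.create(
            model="gpt-4o",  # any capable chat model works here
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(f"\nThe panel: {reply}")

if __name__ == "__main__":
    the_stage()
```

The same prompt works just as well pasted straight into any chat interface; the script only adds the convenience of keeping the whole exchange in one place.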
🗝️ Quick Bytes:
OpenAI announces SearchGPT, its AI-powered search engine
OpenAI is reportedly developing a new AI-powered search engine called SearchGPT, which could challenge Google's dominance in the search market. According to sources familiar with the project, OpenAI has been acquiring large amounts of web data and is working on an advanced language model designed specifically for search. This move aligns with OpenAI's broader strategy to expand its AI capabilities beyond chatbots and into more practical applications.
While details about SearchGPT's features and launch timeline remain unclear, the project is seen as a significant step for OpenAI in competing with other AI companies. The development of SearchGPT also raises questions about OpenAI's relationship with Microsoft, a major investor and partner. As OpenAI ventures into the search domain, it could potentially compete with Microsoft's Bing search engine, which already incorporates some of OpenAI's technology.
Meta releases the biggest and best open-source AI model yet
Llama 3.1 405B, the flagship model in this release, is positioned as the world's largest and most capable openly available foundation model. The model demonstrates improved contextual understanding, more nuanced language generation, and enhanced reasoning abilities compared to its predecessors.
When compared to competitors, Llama 3.1 405B is reported to be competitive with leading foundation models like GPT-4, GPT-4o, and Claude 3.5 Sonnet across a range of tasks. Its open-source nature allows for full customization, giving developers the flexibility to adapt it for specific needs and applications. The model also offers some of the lowest cost per token in the industry, according to external testing.
Additionally, Llama 3.1 introduces upgraded 8B and 70B models with multilingual capabilities, a significantly longer context length of 128K tokens, and improved tool use and reasoning. This positions the family as a strong contender for applications such as long-form text summarization, multilingual conversational agents, and coding assistance.
AI achieves silver-medal standard solving International Mathematical Olympiad problems
DeepMind has made significant progress in artificial intelligence for mathematical problem-solving, with their AI models achieving a silver-medal standard in tackling International Mathematical Olympiad problems.
The company developed two specialized systems: AlphaProof, which uses reinforcement learning for formal mathematical reasoning, and AlphaGeometry 2, an improved version of their geometry problem-solving AI. Together, these models successfully solved four out of six IMO problems, scoring 28 points, which is equivalent to a silver medal performance.
🎛️ Algorithm Command Line
I can't stop thinking about this, and I don't see much attention around it.
Harari touched on a very interesting point in a conversation with Lex Fridman (I encourage you to watch the full podcast because it's filled with a lot of gold).
One of the reasons that people got so obsessed with LLMs is their pure ability to simulate real conversation, real care, and the feeling of listening to everything that you say or write. It's a thin red line between seeing AI as a tool and having an intimate relationship with a machine.
Remember the "Sky" voice that OpenAI presented this year during a keynote?
I was fascinated by people's reaction to it - they were anthropomorphizing it almost instantly. Reddit was full of threads of people with an immense "thirst" to have a conversation with the new voice layered on top of the LLM.
It was so natural, so comforting, and even flirty at times. We suddenly had access, at scale, to a solution that not only grabbed our attention but also simulated some kind of safe space in which to feel understood.
Isn't that saying something about our current condition as human beings?
In a recent podcast I recorded with Beata Matras, I shared that I sometimes use LLMs for my mental well-being - on different levels: one is to find negative thinking patterns, another is to find comfort around the harder decisions I'm trying to make, and another, quite often, is to try to make my imposter syndrome go away.
It's unbelievable how well it works.
Because the machine doesn't have emotions (yet), and what's more important, it doesn't have the built-in bias about you - the one you have when you look in the mirror.
Can it help in a positive way? Of course it can. But this is also very dangerous territory.
I am curious about your perception of that - what do you think? Have you noticed any of these patterns within your interactions with LLMs? What feelings do you have?
Let that soak in.
🗞️ Longreads
‘In the age of AI, software creators, like content creators, will emerge as the industry’s non-professional creative class.’