Should you outsource your thinking to AI?

I might be contrarian, but at least it's intentional.

I have this theory that three weeks into something new is when you reach the peak of overwhelm. You start to find your way around, and you’ve learnt a few names, but you still don’t understand much.

Starting a PhD is no different. I don’t think I’ve ever felt this stupid, and I realise that is a good thing — there would be no point in doing it if it didn’t stretch my thinking.

But to me, one of the most interesting aspects of the last few weeks has been the conversations about AI. Specifically, the idea that AI is a performance tool.

Take a group of 20 adults. Give them challenging tasks — understanding large volumes of scientific content within a short timeframe. What happens?

Many are at the beginning of their AI journeys, dipping their toes into the possibilities: getting content summarised, or asking for specific details about papers, such as the research question.

Others are already heavily invested in AI tools, primarily ChatGPT and NotebookLM. The goal is to get help processing large volumes of content and to gain a deeper understanding of what you’re actually trying to learn. AI helps you find themes and differences.

Neither use is related to examinations; AI is part of the process that helps people understand the content well enough to participate in seminar discussions.

But personally, I’m no longer in either of these two camps.

Sure, I could probably save some time by asking AI about how texts are similar. But I’m concerned about what that saved time might cost me further down the line. What skills am I not developing, or maybe even losing, because I outsourced them to AI? How many of the AI-heavy users are making that decision intentionally?

I’m still thinking a lot about my most popular Substack newsletter, “Is AI making me stupid?” And how I feel like we’re constantly trying to optimise for speed and convenience without really reflecting on the long-term effect of not thinking as much ourselves.

It’s not that I’m against AI altogether.

I have an AI prompt that I use to format the highlights I’ve made in books and articles into notes — very tedious and repetitive work. But I still do it manually for shorter texts, as going through my own highlights is a good way to understand what I just read.

But just like I’ve quit social media to save my brain from rotting, I feel like I need to protect my neural pathways from decay.

Yes, I might be slower. But who decided that faster is better? And if the end goal is to be so fast that everything is automated, then what will eventually be left?

I think education is a good premise for this discussion, because there is no shareholder value to generate, no quarterly goals or salary reviews.

It’s just me and my personal choice to learn something new.

Still, the narrative surrounding the decision to use AI is primarily one of capitalistic competition.

“If she uses AI and I don’t, I might get behind in life”.

As if life were a competition with winners and losers, and you’d better make sure you die a top performer.

I’m not saying life is never like that. Opportunities are sometimes unevenly distributed, and it can feel like we are “competing” against each other. But even if we set aside capitalism and the narrative it’s constantly feeding us, I still wonder if our productivity is what will ultimately put anyone ahead in life.

As with many other forms of knowledge work, the primary goal of a PhD is to think novel thoughts and communicate them effectively.

Currently, I’m not convinced that using AI is an effective way to develop those skills. Instead, I’m becoming increasingly confident that using AI is detrimental to those very abilities.

My personal goals are instead:

  1. Read daily
  2. Write one longer text weekly

It’s not too hard. When I stopped using social media, I gained a lot of time back, and I’ve been using it to read and write.

To be clear, I’m not against AI. We will benefit tremendously from using it to solve the critical problems of our time. But I don’t want to outsource my own thinking to machines.

Am I just getting old and resistant to change? Maybe. Let me know what you think in the comments.

Anna

Recommendations of the Week

MEME CULTURE and the POLITICAL RIGHT — After the murder of Charlie Kirk, mainstream media outlets seemed to struggle to make sense of the cultural clues left by the shooter. This Instagram account explains the referenced memes and the Groyper wars, shedding a different light on the case.

CAPITALISM and OPINION — Americans’ support for capitalism is at a record low while views on socialism remain divided, with Democrats now favouring socialism over capitalism.

TECH and POLITICS — Microsoft is restricting internal discussions and tightening office access after employee protests over its business in Israel.

SOCIAL MEDIA and DEMOCRACY — Protests against political corruption and demands for free speech and social media access in Nepal have escalated into violent unrest, with protesters storming parliament, the prime minister resigning, and 19 people killed.

CONTENT and POLITICS — After Charlie Kirk was fatally shot at a public event, graphic videos of the incident spread widely on social media, revealing how platforms are struggling to enforce their own content rules and protect users from violent footage.

EDUCATION and OPINIONS — The number of Americans who believe it is essential to go to college has hit an all-time low. The percentage of Americans saying college is “very important” has fallen to 35%.

AUTHOR PORTRAIT — Patricia Lockwood talks about grief, losing touch with reality during Covid, and finding humour and meaning in a chaotic online world.