Sergey Brin on AI’s Impact on Careers: Should You Study Computer Science in 2026?

A short clip of Sergey Brin answering the “should I still study computer science?” question is going viral for a reason. It’s not hype. It’s a realistic answer from someone who has seen multiple technology shifts up close.

But the bigger story is the context around that clip.

This wasn’t a random interview or podcast moment. It was recorded live as the closing event of Stanford Engineering’s centennial year, with Brin back on campus in front of a room full of students. The discussion moved between Stanford’s role in Google’s origin story, lessons from building at scale, and what AI actually changes about education and careers.

The Stanford context matters more than the viral quote

The conversation starts by grounding Brin’s perspective in his student years. He talks about how creative and experimental Stanford felt at the time, how much freedom students had, and how Google itself grew out of research work done at Stanford.

That matters, because his AI comments aren’t coming from a place of fear or hype. They’re coming from someone who believes strongly in fundamentals, curiosity, and building things that actually work.

What Brin actually says about AI and careers

When the topic shifts to careers, Brin is very clear about two things.

First, predicting the details is hard. No one can say exactly which jobs will disappear, which will transform, or which new roles will emerge.

Second, the direction is obvious. AI will change careers the same way the web and smartphones did. Not partially. Completely.

That framing is important. It’s not about “AI replacing everyone.” It’s about the definition of valuable work changing.

“Don’t run away from computer science”

A lot of people interpreted the clip as “AI is good at coding, so don’t study CS.” That’s not the message.

Brin’s point is almost the opposite: if you enjoy computer science, you shouldn’t avoid it just because AI can write code. He even jokes that switching to something like comparative literature won’t protect you from AI either.

He also makes an important distinction: when AI makes a mistake in creative or writing tasks, the consequences are usually small. When AI makes mistakes in code, systems break, data leaks, money is lost. That’s why human judgment still matters so much in engineering.

AI as an accelerator, not the owner of the outcome

Brin talks about how he personally uses AI: brainstorming, exploring ideas, generating multiple options quickly, knowing most of them will be mediocre but a few might be genuinely useful.

That’s the right mental model.

AI accelerates thinking, but it doesn’t own the result. You still need someone to decide what’s correct, what’s safe, what’s worth shipping, and what will hold up over time.

In real work, AI can be impressively smart and surprisingly wrong — sometimes in the same output.

What this means for web development in 2026

From the perspective of building real client websites and web apps, the shift is already visible.

Easy work gets automated first

Boilerplate code, basic layouts, repetitive components, simple integrations — these are increasingly handled by AI.

If your value is only “I can produce average code quickly,” that position is getting weaker.

Judgment becomes the differentiator

What stays valuable is the work that requires responsibility and experience:

  • translating unclear business needs into real technical decisions

  • understanding tradeoffs between speed, performance, cost, and complexity

  • debugging edge cases that only appear in production

  • security, privacy, reliability

  • UX details that make something feel solid instead of fragile

AI can generate code, but it doesn’t take ownership when things go wrong.

Fundamentals matter more, not less

As tools get more powerful, fundamentals become more important:

  • how the web actually works

  • performance, rendering, caching

  • data modeling and system design

  • reasoning about complex systems

This is why claims that “CS is dead” usually miss the point.

Even big companies don’t predict perfectly

One interesting part of the broader conversation is Brin openly acknowledging that Google hesitated in parts of the AI race. Concerns about models saying incorrect things slowed Google down, while competitors pushed ahead faster.

That’s a useful reminder for students and developers: even the biggest companies don’t have perfect foresight. Trying to “pick the safest path” based on predictions rarely works.

Building durable skills does.

My take: should you study computer science in 2026?

If you’re making that decision:

  • If you enjoy engineering and problem-solving, computer science is still a strong path.

  • Just don’t assume the job will look the same as it did five or ten years ago.

  • Learn fundamentals, build real projects, and use AI daily — but never stop thinking for yourself.

AI won’t make web development obsolete.

It will remove the comfortable middle and reward people who can think clearly, adapt fast, and ship work they’re willing to stand behind.

Sorca Marian

Founder, CEO & CTO of Self-Manager.net & abZGlobal.net | Senior Software Engineer

https://self-manager.net/