
An AI engineer hasn’t touched code since December. He’s excited about AI, but worries about the future.

This as-told-to essay is based on a conversation with Rohan Gore, a 38-year-old AI engineer at Reach3 Insights, a market research firm based in Vancouver. His identity and employment have been verified by Business Insider. The following has been edited for length and clarity.

I graduated with a computer science degree back in 2010, and I’ve worked in the industry since.

I started as a typical software engineer working on some of the more interesting and complex problems in market research. Now, I'm an AI engineer.

I have mixed feelings about the impact of AI on the software engineering industry.

I completely handed off all my coding-related tasks to AI in December, and it did really well. I did not feel good about that initially. I’ve been coding for so long, and I realized at the time that coding is definitely gone.

I haven’t coded since then. That’s the new reality of my job. Just because AI has taken over my coding tasks, though, doesn’t mean I can play outside all day. I’m still able — and expected — to produce the same level of output and quality of work. Sometimes I feel burned out because the expectation is that I should be doing more work, even though AI can take over some tasks.

I’m excited about AI and enjoy how it’s changed my job

Right now, AI needs a lot of guardrails, and I believe that my background and systems knowledge still make me pretty useful.

I’m happy that I haven’t coded in three months because there’s a lot that I’m doing, like software architecture and design, that isn’t going anywhere. AI can help architect or design, but it needs a lot of hand-holding today. That makes the knowledge of software engineering more important than ever in the age of AI — at least for now.

There is also a lot of systems thinking that needs to be applied, which I love and am completely in harmony with, so it’s a good state for me.

I’ve been coding for years, and at the end of the day, it’s a means to an end. I never saw it as rocket science. But there is a lot of nuance to coding, which can be frustrating and tiring to work with at times. So, I’m enjoying this next era.

AI also lets me do a lot more research, and faster. It allows me to question product decisions and think more, rather than just execute. As an engineer, I was constantly under delivery pressure, but now it frees me up and actually allows me to critique what a project manager is doing, because I understand the product decisions that are being made. It helps me take on a broader product engineering role, which I’m enjoying.

I’m feeling happy that I can deliver at this pace, because that wasn’t possible earlier. It’s cool that I can make a feature in two or three days instead of a month. That’s a crazy transformation that I feel happy and excited about.

I’m concerned about the future

Even though I’m enjoying the current state, there’s always this behind-the-scenes thought of, “Ok, what’s next?” The technology is getting better every day. I’m not comfortable with AI being in a state where it can run on its own forever. I don’t know what I would do in that scenario.

It feels weird that the job has changed so much. Sometimes I find myself speechless. I have so many thoughts and emotions going on. Most of them have turned into excitement, but the more I think about it, the more it turns into fear. Sometimes I feel intimidated because these agents are so powerful.

I even openly said in my company's Slack that there's no way in my lifetime I could've coded something even 10% as good as these agents. At the end of the day, if you look at a typical problem, most humans are no match for solving it, unless you're talking about the 1% geniuses.

Sometimes I feel defeated because coding was a skill I acquired over time and it took a lot of time to get to a state where I could do that well. It’s not that I don’t like the change, but there’s a fear there.

What happens if all of this gets completely automated and people just ask AI for things?





An Accel VC says the vibe coding market is big enough for Cursor and Claude Code

The great vibe coding war of 2026 isn’t the bloodbath it appears, says a venture capitalist whose firm backed Cursor.

On an episode of the “20VC” podcast released on Monday, Miles Clements, a partner at VC firm Accel, said that the AI-assisted coding industry is big enough for Anthropic’s Claude Code and Cursor.

After Anthropic released its latest model, Opus 4.6, last month, founders and developers said on X that they were ditching Cursor for Anthropic's Claude Code.

“This market is growing enormously, and I don’t think a lot of these companies are actually experiencing success at the expense of the others,” Clements said.

Cursor, founded in 2022, was valued at $29.3 billion late last year. Accel first invested in the AI coding startup in June and co-led its $2.3 billion Series D round in November.

On the podcast, Clements called Claude Code an “amazing product.” Still, he said, there are two reasons Claude’s latest improvements don’t hurt Cursor.

“First of all, they’re bringing so many new cohorts of users online, so people who would not have been software developers a year ago today can be software developers with these tools,” he said.

Second, the market is expanding because consumption per customer is increasing, Clements added.

Last week, Chamath Palihapitiya, a VC and the founder of software incubator 8090, said that Cursor was one of his company’s biggest AI costs.

“We need to migrate off of Cursor,” he wrote on X. “It’s just too expensive vs Claude Code. The latter is equivalent, and if you use the Pro plan, you eliminate huge Cursor bills for token consumption.”

Cursor did not respond to a request for comment from Business Insider.

On a podcast released in late February, Insight Partners cofounder Jerry Murdock said that Cursor is behind its peers.

“Most of the companies I mentioned, their view is that Cursor is obsolete today,” he said. “I think those guys are going to have to quickly embrace autonomous agents.”

On Monday’s podcast, Clements countered Murdock’s remarks.

“Like, all due respect, I thought about playing in the NFL, but instead I walked onto a college football team and was the fifth-string inside linebacker,” he said. “You’re not looking at any real metrics. Like, who are these people to make these judgments?”

A representative for Murdock did not immediately respond to a request for comment.





Sam Altman says OpenAI has gone ‘code red’ multiple times — and they’ll do it again

“Code red” isn’t a one-off at OpenAI.

CEO Sam Altman said on an episode of the “Big Technology Podcast” published Thursday that the company has entered emergency mode multiple times in response to competitive threats — and expects to continue doing so as rivals close in.

“It’s good to be paranoid and act quickly when a potential competitive threat emerges,” Altman said.

“My guess is we’ll be doing these once, maybe twice, a year for a long time, and that’s part of really just making sure that we win in our space,” he added.

Altman said that OpenAI had gone “code red” earlier this year when China’s DeepSeek emerged. DeepSeek shocked the tech industry in January when it said its AI model matched top competitors like OpenAI’s o1 at a fraction of the cost.

OpenAI entered “code red” earlier this month, about two weeks after Google released its latest AI model, Gemini 3. The model drew widespread praise after its November release, with Google touting it as its most advanced model to date. Altman reportedly told staff in an internal Slack memo that OpenAI would prioritize ChatGPT while pushing back other product plans.

Altman said in the podcast episode that Google’s Gemini 3 did not have “the impact we were worried it might.”

“But it did — in the same way that DeepSeek did — identify some weaknesses in our product offering strategy, and we’re addressing those very quickly,” he added.

Since OpenAI entered “code red,” the company has moved quickly to ship new upgrades and features.

Last week, it rolled out a more advanced AI model aimed at improving ChatGPT’s performance across professional work, coding, and scientific tasks. OpenAI also unveiled a new image-generation model earlier this week.

Altman said the company will not be in code red “that much longer.”

“Historically, these have been kind of like six- or eight-week things for us,” he added.

Declaring a “code red” also has precedent at other tech companies. In 2022, Google declared an internal “code red” after ChatGPT’s debut. The search giant was lagging in consumer AI, despite having funded much of the research that made the AI boom possible.





The creator of Anthropic’s Claude Code likes to hire engineers who do ‘side quests’ like making kombucha

Want a job at Anthropic? It might help to get a hobby.

The AI boom is changing the job requirements for an engineer. Not only do they need to have coding skills, but they also must know how to operate vibe coding tools and stay up to date with new AI models.

Anthropic leader Boris Cherny looks for something else: “Side quests.”

“When I hire engineers, this is definitely something I look for,” he said on “The Peterman Pod.”

Cherny’s definition of side quests includes “cool weekend projects,” like someone who’s “really into making kombucha.” It’s a sign that the engineer is curious and interested in other things, he said.

Much of Cherny’s own growth came from his side projects. Now a key figure at Anthropic, he created Claude Code, a tool that is popular with engineers across the country.

“These are well-rounded people,” he said. “These are the kind of people I enjoy working with.”

Cherny also said he prefers that his new hires be “generalists.”

He gave the example of an engineer who can code, but is also able to work on product and design. That all-star engineer also seeks out user feedback.

“This is how we recruit for all functions, now,” he said. “Our project managers code, our data scientists code, our user researcher codes a little bit.”

Cherny isn’t alone in pushing for jobs to become more generalist. Figma CEO Dylan Field said in October that AI was causing job titles to merge, resulting in everyone being a “product builder.”

What else is Anthropic looking for? For a time, it restricted how candidates could use AI in their applications.

In May, Business Insider reported that Anthropic asked candidates for certain jobs not to use AI in their written responses so the company could test their “non-AI-assisted communication skills.”

Anthropic changed its policy in July, allowing candidates to seek out assistance from Claude.

For younger engineers, a job at Anthropic may be hard to come by. In May, CPO Mike Krieger said on “Hard Fork” that he was focused on hiring experienced engineers — and had “some hesitancy” with entry-level workers.

On the podcast, Cherny said that his love of generalists came from his career trajectory. Having worked at startups since he was 18, Cherny had to do everything, he said.

“At big companies, you get forced into this particular swim lane,” he said. “It’s just so artificial.”



