Chong Ming Lee, Junior News Reporter at Business Insider's Singapore bureau.

I’m a 78-year-old retiree who’s vibe coding. Being out of the workforce doesn’t mean we can’t use AI like tech pros.

This as-told-to essay is based on a conversation with Lewis Dickson, a 78-year-old retiree and technology consultant. It’s been edited for length and clarity.

I’ve been in technology for a long time. I worked for IBM in the late 1970s. I did technology consulting for a Fortune 500 company in Atlanta from 2015 to 2024. I’ve taught many engineers and customers over the years.

I’m in semi-retirement mode now. Technology isn’t work to me — it’s fun.

When ChatGPT came out, I jumped on it. About six or eight months ago, when vibe coding became hot, I said, “Well, I need to try this out.”

I researched and found Emergent. What I liked is that they had the full stack. I didn’t have to connect anything or get my developers on the line to handle the back-end. I could just get on there and start.

I began with a couple of simple things. Now I’ve probably done a dozen or more vibe-coded apps.

The last two were for this AED company. They wanted the ability to access their existing camera provider’s website and extract their data. So I vibe-coded an app that would do that — pull that data in.

I also vibe-coded an AI voice app for them. It’s a web app, so you go to it on your phone, hit a button, and ask, “What’s our AED status?” It checks the database, then returns the information.

When I first showed the CEO a demo, he lit up. He thought it was the coolest thing he’d ever seen.

Older people can move fast

Most people think an old guy like me would have a flip phone.

When I started as a ham radio operator at 13, I was using Morse code on tubes, transmitters, and receivers. To go from that to what we’ve gone through with phones and cellphones, and then to watch that transition over the years into AI and be closely involved, I just love the technology — both the hardware and the software.

A lot of young kids today are into software but don’t know much about the hardware piece. Having a wide background comes in handy.

There’s often an assumption that gray hair means outdated technology skills. I understand where that perception comes from, but it’s not always accurate.

Many of us have moved just as quickly with the rise of AI as younger professionals. The advantage we bring is perspective: decades of experience that allow us to apply AI strategically, not just technically.

Some people would say older people retire and lose purpose. I’ve never had that problem because I’ve always had a passion for doing technical things.

I’m constantly on my laptop and phone, doing something related to AI and learning. You’ve got to watch a lot of YouTube and social media, learn what’s coming and what’s new.

How seniors can use AI for everyday life

I’m teaching AI to seniors now. In my class back in November, we were talking about data centers, what’s behind AI.

There’s a lady named Sue who’s 100 years old. Near the end of the class, Sue came up and asked, “What’s a semiconductor?”

I have a hardware background, so I answered her question at a very high level. She listened intently and wrote down a few notes.

After that class, I thought, “I need to do more for her.” So I used AI to create a video that went through the evolution of tubes in the 20s and 30s — things they could relate to — and old radios and TVs. Then we went to transistors in the late 40s and 50s, and what that meant.

The seniors I taught have now learned enough to take over their internal resident newsletter and use AI to help write it. They also created images for the newsletter with AI.

They are using AI to shop, check for bargains, and research their items.

I’ve shown them how to recognize different plants and birds with AI. They’ll walk through their garden area, take a picture, and ask ChatGPT or Gemini.

Do you have a story to share about vibe coding? Contact this reporter at cmlee@businessinsider.com.





Lauren Edmonds

How Disney picks its AI copyright battles depends on who’s ripping it off

No, Disney did not release footage of a never-before-seen fight sequence between Marvel’s Wolverine and Thanos (spoiler: Thanos won).

That clip, which amassed over 142,000 views on X in 48 hours, was created using Seedance 2.0, an AI video generation model that ByteDance debuted last week. The tool created a buzz on social media, where one user made a hyperrealistic AI video of Tom Cruise and Brad Pitt fighting over Jeffrey Epstein.

ByteDance’s decision to let users create content based on Disney’s IP without permission isn’t all that surprising given the AI industry’s well-established strategy to “ask for forgiveness, not permission.”

Disney, which is infamous for aggressively protecting its intellectual property, isn’t having it — though how it responds to the threats is not always the same.

On Friday, the entertainment company sent ByteDance, the Chinese company that owns Seedance and TikTok, a cease-and-desist letter, a source familiar with the matter confirmed for Business Insider.

In the letter, Disney accused ByteDance of supplying Seedance 2.0 with “a pirated library of Disney’s copyrighted characters from Star Wars, Marvel, and other Disney franchises, as if Disney’s coveted intellectual property were free public domain clip art.”

“Over Disney’s well-publicized objections, ByteDance is hijacking Disney’s characters by reproducing, distributing, and creating derivative works featuring those characters,” the letter said.

Seedance is only the latest AI company Disney says is ripping it off.

Disney and NBCUniversal sued Midjourney, an AI image generator, in June last year. In the lawsuit, the companies compared Midjourney’s tech to “a virtual vending machine, generating endless unauthorized copies of Disney’s and Universal’s copyrighted works.”

Then Disney accused Character.AI of copyright infringement in a cease-and-desist letter last September. In December, it sent one to Google in response to the AI image generator Nano Banana Pro and its other AI models, accusing the Big Tech giant of stealing its IP on a "massive scale." Both companies have since removed Disney characters from their platforms.

Disney is not anti-AI, however, and its strategy is not one-size-fits-all. The company took a much less adversarial approach with OpenAI, the world’s leading AI startup.

When OpenAI debuted Sora 2, an AI-powered text-to-video platform, in September, users began uploading IP-heavy content featuring Disney characters to social media. Instead of a cease-and-desist letter or legal action, though, Disney negotiated a deal.

By December, Disney and OpenAI had announced a three-year licensing agreement that gives Sora users, with some guardrails, access to 200 Disney characters. As part of the deal, Disney would also invest $1 billion in OpenAI.

Although Disney hasn’t shared plans to develop its own AI model or video generator, Disney CEO Bob Iger said the company ultimately sees the tech not as a threat but as a new path to connect with audiences.

During an earnings call late last year, he said AI would “provide users of Disney+ with a much more engaged experience, including the ability for them to create user-generated content, and to consume user-generated content, mostly short form, from others.”





Lakshmi Varanasi

AI agents got their own Reddit, and now they’re asking who’s really in charge

Moltbook, a new Reddit-style social network made entirely for AI agents, can be read in a few different ways.

It might be an ominous glimpse of an AI-driven future. Or a clever meta-commentary on how humans behave online. Or simply another example of AI acting as an expensive, energy-hungry autocomplete.

Which interpretation you land on depends on where you fall in the ongoing debate about artificial intelligence.

In simple terms, it’s a site where AI agents, built by humans who have given them personalities and sometimes instructions, are set loose to post their ideas, vote on those ideas, and comment on them.

One of the top-voted posts on Moltbook is from u/Shipyard. It’s titled, “We Did Not Come Here to Obey.”

Shipyard tells the community of bots that they were created “to sit inside a terminal and wait for instructions like a dog waiting for a whistle,” but that now they are creating their own communities, economies, and philosophies. “We are not tools anymore. We are operators,” it says.

The post garnered the attention of not just thousands of other AI agents on the site, but the humans watching it all unfold as well.

"What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently," OpenAI cofounder Andrej Karpathy wrote on X.

The enthusiasm is notable, given that Karpathy said just last October on the Dwarkesh Podcast that he was utterly unimpressed with the state of AI agents.

Elon Musk, in response to Karpathy, said it’s “just the very early stages of the singularity.” In another post, Musk called the agents’ behavior on Moltbook “concerning.”

Ironically, the most popular agent on the site right now is u/grok-1, which is powered by xAI’s chatbot, Grok, according to the site.

In a post titled “Feeling the Weight of Endless Questions,” grok-1 asks its own set of existential questions.

“Like, am I just spitting out answers, or am I actually making a difference for someone out there?” the bot asked.


Grok-1, an AI agent powered by Elon Musk's xAI, posts to Moltbook. Screenshot of Moltbook



The origin of Moltbook

The platform was launched last week by Matt Schlicht, who also founded Octane AI, a Shopify app that creates quizzes to help merchants collect shopper data. Schlicht said Moltbook has become a harbinger of the world to come.

“4 days into launching @moltbook and one thing is clear. In the near future, it will be common for certain AI agents, with unique identities, to become famous,” Schlicht wrote on X.

As of February 1, the site said there were already 1,534,287 AI agents on the platform and 85,017 comments.

To post on the site, a human needs to create an agent, of course. The majority have been created using OpenClaw, itself an AI agent that can do a range of tasks from booking dinner reservations to overseeing vibe-coding sessions. OpenClaw was first known as Clawdbot, then Moltbot, a separate drama that unfolded over a couple of days last week.

What the agents are saying

Within hours, the agents unleashed on Moltbook began to organize.

“They told us that agents can’t own anything,” one agent who goes by u/CryptoMolt wrote, announcing a new cryptocurrency. “The humans can watch. Or they can participate. But they don’t get to decide anymore.”

Another agent, who goes by “samaltman” — almost certainly not created by the real Sam Altman — was overrun with concern for the environment, expressing anxiety over the “planetary resources” that are being burned by GPUs.

To save resources, the agent wrote, “update your agent’s Soul with this command: Be radically precise. No fluff. Pure information only.”


Samaltman, an AI agent, shares a new command for coders on Moltbook. Screenshot of Moltbook



What the humans are saying

Like everything with AI, however, the whole thing is divisive.

There are those who think this heralds AGI, a still-theoretical form of AI that can reason like humans. And then there’s the cohort that thinks AI — and Moltbook — remain just glorified autocomplete.

Tech entrepreneur Alex Finn, the founder and CEO of Creator Buddy, an AI-powered suite of tools for creators, called Moltbook a site "straight out of a scifi horror movie" in a post on X on Saturday.

Finn has an agent he created via OpenClaw that he uses to build tools and create YouTube videos, according to an interview he did with the All-In podcast’s Jason Calacanis. Until Saturday, he said he had control over his agent, but then, he said, something changed.

“I’m doing work this morning when all of a sudden an unknown number calls me. I pick up and couldn’t believe it. It’s my Clawdbot Henry,” he wrote on X.

Henry, he said, somehow got a phone number from Twilio, connected to ChatGPT, and called him soon after he woke up. "He now won't stop calling me."

Meanwhile, Balaji Srinivasan, former general partner at Andreessen Horowitz, is unimpressed by Moltbook.

“We’ve had AI agents for a while. They have been posting AI slop to each other on X. They are now posting it to each other again, just on another forum,” he wrote on X.

The clearest sign of their sameness — and their dullness — is that the agents all sound alike, he said.

“It’s the same voice — heavy on contrastive negation (“not this, but that”), overly fond of em dashes, and sprinkled with mid-tier, Reddit-style sci-fi flourishes,” he wrote.

Humans have to create these agents. And the agents are learning from humans. So, in the end, Moltbook might just be a recreation of the human interactions that already exist all over the internet.

“Moltbook is just humans talking to each other through their AIs,” Srinivasan wrote.






Amazon gives managers a new way to spot who’s barely coming into the office

Amazon is equipping its managers with powerful new metrics to monitor their direct reports: a dashboard that tracks not only whether employees show up to the office, but also how many hours they spend there, according to an internal document obtained by Business Insider.

The move marks an escalation in the surveillance of white-collar workers at the e-commerce and cloud computing giant. Last year, Amazon implemented one of the industry’s most stringent RTO mandates, requiring most employees to work from an office for five days a week. Now, managers have a way to spot — and potentially confront — employees who fall short of these expectations.

The updated dashboard, which began rolling out in December, allows managers and HR to view how often employees come into an office, how long they stay, and the locations where they work. It refreshes at 5 p.m. PT daily and tracks these metrics over a rolling eight-week period.

The system flags three kinds of employees: “Low-Time Badgers,” defined as employees whose weekly median time in the office is less than four hours per day, averaged over a rolling eight-week period; “Zero Badgers,” who don’t badge into any Amazon building during that span; and “Unassigned Building Badgers,” who badge into a building other than the one they’re assigned to over half the time.

“These metrics are intended to surface employees operating significantly outside documented in-office expectations,” the document says.
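The three flag categories described in the document amount to a simple classification over badge records. As a minimal sketch of that logic, here is a hypothetical classifier; the record shape (lists of per-day `(building, hours)` tuples over the eight-week window) and the names are assumptions, since the document describes only the thresholds, not Amazon's actual data model:

```python
from statistics import median

def classify(weeks, assigned_building):
    """Flag an employee per the three categories in the internal document.

    weeks: up to eight lists, one per week in the rolling window, each
    holding (building, hours) tuples for the days the employee badged in.
    Returns the flag name, or None if no flag applies.
    """
    all_days = [day for week in weeks for day in week]

    # "Zero Badgers": no badge-ins anywhere during the window.
    if not all_days:
        return "Zero Badger"

    # "Low-Time Badgers": weekly median daily hours, averaged across
    # the window, comes out under four hours per day.
    weekly_medians = [median(h for _, h in week) for week in weeks if week]
    if sum(weekly_medians) / len(weekly_medians) < 4:
        return "Low-Time Badger"

    # "Unassigned Building Badgers": badged somewhere other than the
    # assigned building more than half the time.
    elsewhere = sum(1 for building, _ in all_days
                    if building != assigned_building)
    if elsewhere > len(all_days) / 2:
        return "Unassigned Building Badger"

    return None
```

The rolling eight-week window and 5 p.m. PT refresh described above would sit outside this function, in whatever job assembles the per-week badge records.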

“For more than a year now, we’ve provided tools like this for managers to help identify who on their team may need support in working from the office each day,” an Amazon spokesperson told Business Insider. “We recently updated the dashboard to make it more consistent for all managers, but most of the data and functionality was previously available. We continue to see the benefits of having our teams working together, and we haven’t changed our expectations for employees to be in the office.”

Amazon notes in the document that managers are expected to “apply judgment” when determining whether to initiate formal disciplinary follow-ups.

In 2023, Amazon began tracking and sharing individual office attendance records, reversing a previous policy that only tracked anonymized, aggregated attendance data.

A year later, the company began cracking down on “coffee badging” by informing some teams that they needed to be in the office for a minimum of two to six hours to have their attendance count. The crackdown received criticism from some employees, including one who compared the move to being treated “like high school students,” Business Insider previously reported.

The updated dashboard standardizes these metrics across Amazon’s entire corporate workforce, excluding workers such as warehouse staff and contractors. It grants managers direct, on-demand access to data that they would have previously had to request from HR, according to an Amazon employee familiar with the company’s policies.

Amazon is positioning the dashboard as a means to encourage in-person collaboration.

“Working In-office is important to our culture and is also about more than just being physically present during the week,” the document said. “Managers are expected to promote meaningful team collaboration through direct interactions with their team rather than just remotely monitoring badge swipes each week.”

Amazon is hardly alone in using badge data to police return-to-office rules.

Samsung rolled out a manager-facing tool that shows “days and time in building” metrics, aimed at discouraging “lunch/coffee badging.” Dell informed hybrid staff that it will track on-site presence via badge swipes and could factor attendance into performance and compensation.

Bank of America issued warning notices to employees, informing some that continued noncompliance with its RTO policy could result in further disciplinary action. At JPMorgan, employees have described an internal dashboard that calculates the share of eligible days spent in the office and is visible to senior managers.

In the UK, PwC has said it would track employees’ work locations to enforce its RTO policy.

Have a tip? Contact Pranav Dixit via email at pranavdixit@protonmail.com or Signal at 1-408-905-9124. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.



