AI Submissions for Fri Feb 13 2026
I'm not worried about AI job loss
Submission URL | 305 points | by ezekg | 500 comments
David Oks pushes back on the viral “February 2020” AI panic sparked by Matt Shumer’s essay, arguing that while AI is historically important, it won’t trigger an immediate avalanche of job losses. He contends real-world impact will be slower and uneven, and that ordinary people will be fine—even without obsessively adopting every new tool.
Key points:
- The panic: Shumer’s “COVID-like” framing and prescriptions (buy AI subscriptions, spend an hour a day with tools) went massively viral—but Oks calls it wrong on the merits and partly AI-generated.
- Comparative vs. absolute advantage: Even if AI can do many tasks, substitution depends on whether AI alone outperforms human+AI. Often, the “cyborg” team wins.
- Why humans still matter: People set preferences, constraints, and context (e.g., in software engineering), which AI agents still need; combining them boosts output and quality.
- Pace and texture: AI advances fast in demos, but deployment into messy organizations is slow and uneven. Expect change, not an overnight “avalanche.”
- Bottom line: Human labor isn’t vanishing anytime soon; panic-driven narratives risk causing harm through bad decisions and misplaced fear.
Here is a summary of the discussion:
Shifting Skills and Labor Arbitrage
Commenters debated the nature of the "transition period." While some agreed with the article that AI removes mechanical drudgery (like data entry) to elevate human judgment, skeptics argued this ultimately acts as a "leveler." By reducing the "penalty" for lacking domain context, AI shrinks training times and simplifies quality control. Several users warned this facilitates labor arbitrage: if the "thinking" part is packaged by AI and the "doing" is automated, high-level Western jobs could easily be offshored or see salary stagnation, causing a decline in purchasing power even if headcount remains flat.
The "Bimodal" Future of Engineering A strong thread focused on the consolidation of technical roles. Users predicted that specialized roles (Frontend, Backend, Ops) will merge into AI-assisted "Full Stack" positions. This may lead to a bimodal skill split:
- Product Engineers: Focused on business logic, ergonomics, and customer delight.
- Deep Engineers: Focused on low-level systems, performance tuning, and compiler internals.
The "middle ground" of generic coding is expected to disappear.
The Myth of the 10-Person Unicorn
Participants discussed the viral idea of "10-person companies making $100M." Skeptics argued that while AI can replicate code and product features, it cannot easily replicate sales forces, warm networks, and organizational "moats." Historical comparisons were made to WhatsApp (55 employees, $19B acquisition), though users noted those teams were often overworked outliers rather than the norm.
Physical Automation vs. Software
A sub-discussion contrasted software AI with physical automation, using sandwich-making robots as a case study. Users noted that economic success in physical automation requires extreme standardization (e.g., rigid assembly lines), whereas current general-purpose robots lack the speed and flexibility of humans in messy, variable environments. This provided a counterpoint to the idea that AI will instantly revolutionize all sectors equally.
OpenAI has deleted the word 'safely' from its mission
Submission URL | 555 points | by DamnInteresting | 278 comments
OpenAI quietly dropped “safely” from its mission as it pivots to a profit-focused structure, raising governance and accountability questions
- What happened: A Tufts University scholar notes OpenAI’s 2024 IRS Form 990 changes its mission from “build AI that safely benefits humanity, unconstrained by a need to generate financial return” to “ensure that artificial general intelligence benefits all of humanity,” removing both “safely” and the “unconstrained by profit” language.
- Why now: The wording shift tracks with OpenAI’s evolution from a nonprofit research lab (founded 2015) to a profit-seeking enterprise (for‑profit subsidiary in 2019, major Microsoft funding), and a 2025 restructuring.
- New structure: Per a memorandum with the California and Delaware attorneys general, OpenAI split into:
- OpenAI Foundation (nonprofit), which owns about one-fourth of OpenAI Group
- OpenAI Group, a Delaware public benefit corporation (PBC). PBCs must consider broader stakeholder interests and publish an annual benefit report, but boards have wide latitude in how they weigh trade-offs.
- Capital push: Media hailed the shift as opening the door to more investment; the article cites a subsequent $41B SoftBank investment. Earlier late‑2024 funding reportedly came with pressure to convert to a conventional for‑profit with uncapped returns and potential investor board seats.
- Safety signals: The article highlights ongoing lawsuits alleging harm from OpenAI’s products and notes (via Platformer) that OpenAI disbanded its “mission alignment” team—context for interpreting the removal of “safely.”
- Governance stakes: The author frames OpenAI as a test case for whether high-stakes AI firms can credibly balance shareholder returns with societal risk, and whether PBCs and foundations meaningfully constrain profit-driven decisions—or mostly rebrand them.
- The bottom line: Swapping a safety-first, noncommercial mission for a broader, profit-compatible one may be more than semantics; it concentrates power in board discretion and public reporting, just as AI systems scale in capability and risk. For regulators, investors, and the public, OpenAI’s first PBC “benefit report” will be a key tell.
Here is a summary of the discussion on Hacker News:
Historical Revisions and Cynicism
The discussion was dominated by skepticism regarding OpenAI's trajectory, with users drawing immediate comparisons to Google’s abandonment of "Don't be evil" and the revisionist history in Orwell’s Animal Farm. One popular comment satirized the situation by reciting the gradual alteration of the Seven Commandments (e.g., "No animal shall kill any other animal without cause"), suggesting OpenAI is following a predictable path of justifying corporate behavior by rewriting its founding principles.
Parsing the Textual Changes
Several users, including the author of the analyzed blog post (smnw), used LLMs and scripts to generate "diffs" of OpenAI’s IRS Form 990 filings from 2016 to 2024; a minimal sketch of that approach follows this list.
- The "Misleading" Counter-argument: While the removal of "safely" grabbed headlines, some commenters argued the post title was sensationalized. They noted the mission statement was reduced from 63 words to roughly 13; while "safely" was cut, so was almost every other word, arguably for brevity rather than malice.
- The Financial Shift: Others countered that the crucial deletion was the clause "unconstrained by a need to generate financial return," which explicitly confirms the pivot to profit maximization.
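For illustration, a word-level diff of the two mission-statement excerpts quoted above can be produced with plain Python's difflib. This is a minimal sketch of the idea, not the commenters' actual scripts, and the full Form 990 mission statements are longer than the excerpts used here.

```python
# Minimal sketch of diffing the two mission-statement excerpts quoted above.
# Assumes plain difflib rather than whatever scripts/LLMs commenters used,
# and uses only the excerpts from the article (the full filings are longer).
import difflib

old = ("build AI that safely benefits humanity, "
       "unconstrained by a need to generate financial return")
new = "ensure that artificial general intelligence benefits all of humanity"

# Word-level unified diff: deleted words are prefixed with "-", added with "+".
for line in difflib.unified_diff(old.split(), new.split(), lineterm="", n=0):
    print(line)
# Among the deletions: "safely" and the entire
# "unconstrained by a need to generate financial return" clause.
```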
Comparisons to Anthropic
Users questioned how competitor Anthropic handles these governance issues. It was noted that Anthropic operates as a Public Benefit Corporation (PBC). While its corporate charter explicitly mentions "responsibly developing" AI for the "long term benefit of humanity," users pointed out that as a PBC it is not required to file the publicly accessible Form 990s that nonprofits like the OpenAI Foundation must file, making its internal shifts harder to track.
The "Persuasion" Risk vs. Extinction A significant portion of the debate moved beyond the mission statement to specific changes in OpenAI’s "Preparedness Framework." Users highlighted that the company reportedly stopped assessing models for "persuasion" and "manipulation" risks prior to release.
- Ad-Tech Scaling: Commenters debated whether this poses a new threat or merely scales existing harms. Some argued that social media and ad-tech have already destroyed "shared reality" and that AI simply accelerates this efficiently (referencing Cambridge Analytica).
- Existential Debate: This triggered a philosophical dispute over whether the real danger of AI is "Sci-Fi extinction" or the subtle, psychological manipulation of the public's perception of reality.
Nature of Intelligence
A recurring background argument persisted regarding the nature of LLMs, with some users dismissing current models as mere "pattern completion" incapable of intent, while others argued that widespread psychological manipulation does not require the AI to be sentient—it only requires the user to be susceptible.
Show HN: Skill that lets Claude Code/Codex spin up VMs and GPUs
Submission URL | 128 points | by austinwang115 | 33 comments
Cloudrouter: a CLI “skill” that gives AI coding agents (and humans) on-demand cloud dev boxes and GPUs
What it is
- An open-source CLI that lets Claude Code, Codex, Cursor, or your own agents spin up cloud sandboxes/VMs (including GPUs), run commands, sync files, and even drive a browser—straight from the command line.
- Works as a general-purpose developer tool too; install via npm and use locally.
Why it matters
- Turns AI coding agents from “suggest-only” helpers into tools that can provision compute, execute builds/tests, and collect artifacts autonomously.
- Unifies multiple sandbox providers behind one interface and adds built-in browser automation for end-to-end app workflows.
How it works
- Providers: E2B (default; Docker) and Modal (GPU) today; more (Vercel, Daytona, Morph, etc.) planned.
- Quick start: cloudrouter start . to create a sandbox from your current directory; add --gpu T4/A100/H100 or sizes; open VS Code in browser (cloudrouter code), terminal (pty), or VNC desktop. A minimal usage sketch follows this list.
- Commands: run one-offs over SSH, upload/download with watch-based resync, list/stop/delete sandboxes.
- Browser automation: Chrome CDP integration to open URLs, snapshot the accessibility tree with stable element refs (e.g., @e1), fill/click, and take screenshots—useful for login flows, scraping, and UI tests.
- GPUs: flags for specific models and multi-GPU (e.g., --gpu H100:2). Suggested use cases span inference (T4/L4) to training large models (A100/H100/H200/B200).
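As a rough illustration of the workflow above, here is a minimal sketch of how an agent (or a wrapper script) might drive the CLI from Python. It uses only the commands and flags quoted in the post (login, start ., --gpu, code); exact output, additional flags, and error handling are assumptions rather than documented behavior.

```python
# Minimal sketch: driving the cloudrouter CLI from Python via subprocess.
# Only commands/flags quoted in the post are used (login, start ., --gpu, code);
# exact output formats and any other options are assumptions.
import subprocess

def cloudrouter(*args: str) -> None:
    """Run a cloudrouter subcommand and raise if it exits non-zero."""
    subprocess.run(["cloudrouter", *args], check=True)

cloudrouter("login")                           # one-time authentication
cloudrouter("start", ".", "--gpu", "H100:2")   # sandbox from cwd with two H100s (Modal-backed today)
cloudrouter("code")                            # open browser-based VS Code in the sandbox
```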
Other notes
- Open source (MIT), written in Go, distributed via npm for macOS/Linux/Windows.
- You authenticate once (cloudrouter login), then can target any supported provider.
- Costs/persistence depend on the underlying provider; today’s GPU support is via Modal.
Feedback and Clarification
- Providers & Configuration: Users asked for better documentation regarding supported providers (currently E2B and Modal). The creators clarified that while E2B/Modal are defaults, they are planning a "bring-your-own-cloud-key" feature and intend to wrap other providers (like Fly.io) in the future.
- Use Case vs. Production: When compared to Infrastructure-as-Code (IaC) tools like Pulumi or deployment platforms like Railway, the creators emphasized that Cloudrouter is designed for ephemeral, throwaway environments used during the coding loop, whereas those tools target persistent production infrastructure.
- Local vs. Cloud: Some users argued for local orchestration (e.g., k3s, local agents) to reduce latency and costs. The creators acknowledged this preference but noted that cloud sandboxes offer reliability and pre-configured environments particularly useful for heavy GPU tasks or preventing local resource contention.
Technical Critique & Security
- Monolithic Architecture: User 0xbadcafebee critiqued the tool for being "monolithic" (bundling VNC, VS Code, Browser, and Server in one Docker template) rather than composable, and raised security concerns about disabling SSH strict host checking.
- Creator Response: The creator defended the design, stating that pre-bundling dependencies is necessary to ensure agents have a working environment immediately without struggling to configure networks. Regarding SSH, they explained that connections are tunneled via WebSockets with ephemeral keys, reducing the risk profile despite the disabled checks.
- Abuse Prevention: In response to concerns about crypto-miners abusing free GPU provision, the creators confirmed that concurrency limits and guardrails are in place.
Why Not Native CLIs?
- When asked why agents wouldn't just use standard AWS/Azure CLIs, the maintainers explained that Cloudrouter abstracts away the friction of setting up security groups, SSH keys, and installing dependencies (like Jupyter or VNC), allowing the agent to focus immediately on coding tasks.
Other
- A bug regarding password prompts on startup was reported and fixed during the discussion.
- The project was compared to dstack, which recently added similar agent support.
Dario Amodei – "We are near the end of the exponential" [video]
Submission URL | 103 points | by danielmorozoff | 220 comments
Dario Amodei: “We are near the end of the exponential” (Dwarkesh Podcast)
Why it matters
- Anthropic CEO Dario Amodei argues we’re just a few years from “a country of geniuses in a data center,” warning that the current phase of rapid AI capability growth is nearing its end and calling for urgency.
Key takeaways
- Scaling still rules: Amodei doubles down on his “Big Blob of Compute” hypothesis—progress comes mostly from scale and a few fundamentals:
- Raw compute; data quantity and quality/breadth; training duration; scalable objectives (pretraining, RL/RLHF); and stable optimization.
- RL era, same story: Even without neat public scaling laws, he says RL is following the same “scale is all you need” dynamic—teaching models new skills with both objective (code/math) and subjective (human feedback) rewards.
- Uneven but inexorable capability growth: Models marched from “smart high schooler” to “smart college grad” and now into early professional/PhD territory; code is notably ahead of the curve.
- Urgency vs complacency: He’s most surprised by how little public recognition there is that we’re “near the end of the exponential,” implying big capability jumps soon and potential tapering thereafter.
- What’s next (topics covered):
- Whether Anthropic should buy far more compute if AGI is near.
- How frontier labs can actually make money.
- If regulation could blunt AI’s benefits.
- How fast AI will diffuse across the economy.
- US–China competition and whether both can field “countries of geniuses” in data centers.
Notable quote
- “All the cleverness… doesn’t matter very much… There are only a few things that matter,” listing scale levers and objectives that “can scale to the moon.”
Here is the summary of the discussion surrounding Dario Amodei's interview.
Discussion Summary
The Hacker News discussion focuses heavily on the practical limitations of current models compared to Amodei’s theoretical optimism, as well as the philosophical implications of an approaching "endgame."
- The "Junior Developer" Reality Check: A significant portion of the thread debates Amodei’s claims regarding AI coding capabilities. Users report that while tools like Claude are excellent for building quick demos or "greenfield" projects, they struggle to maintain or extend complex, existing software architectures. The consensus among several developers is that LLMs currently function like "fast but messy junior developers" who require heavy supervision, verification, and rigid scaffolding to be useful in production environments.
- S-Curves vs. Infinite Knowledge: Amodei’s phrase "end of the exponential" sparked a philosophical debate. Some users, referencing David Deutsch’s The Beginning of Infinity, argue that knowledge creation is unbounded and predicting an "end" is a fallacy similar to Fukuyama’s "End of History." Counter-arguments suggest that while knowledge may be infinite, physical constraints (compute efficiency, energy, atomic manufacturing limitations) inevitably force technologies onto an S-curve that eventually flattens.
- The Public Awareness Gap: Commenters discussed the disconnect Amodei highlighted—the contrast between the AI industry's belief that we are 2–4 years away from a radical "country of geniuses" shift and the general public's focus on standard political cycles. Users noted that if Amodei’s 50/50 prediction of an "endgame" within a few years is accurate, the current lack of public preparation or meaningful discourse is startling.
CBP signs Clearview AI deal to use face recognition for 'tactical targeting'
Submission URL | 269 points | by cdrnsf | 157 comments
CBP signs $225k Clearview AI deal, expanding facial recognition into intel workflow
- What’s new: US Customs and Border Protection will pay $225,000 for a year of Clearview AI access, extending the facial-recognition tool to Border Patrol’s intelligence unit and the National Targeting Center.
- How it’ll be used: Clearview claims a database of 60+ billion scraped images. The contract frames use for “tactical targeting” and “strategic counter-network analysis,” suggesting routine intel integration—not just case-by-case lookups.
- Privacy/oversight gaps: The agreement anticipates handling sensitive biometrics but doesn’t specify what images agents can upload, whether US citizens are included, or retention periods. CBP and Clearview didn’t comment.
- Context clash: DHS’s AI inventory links a CBP pilot (Oct 2025) to the Traveler Verification System, which CBP says doesn’t use commercial/public data; the access may instead tie into the Automated Targeting System that connects watchlists, biometrics, and ICE enforcement records.
- Pushback: Sen. Ed Markey proposed banning ICE and CBP from using facial recognition, citing unchecked expansion.
- Accuracy caveats: NIST found face-search works on high-quality “visa-like” photos but error rates often exceed 20% in less controlled images common at borders. In investigative mode, systems always return candidates—yielding guaranteed false matches when the person isn’t in the database.
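To make that caveat concrete, here is a minimal back-of-the-envelope sketch. The 60-billion figure is Clearview's claimed database size from the article; the per-comparison false match rate below is purely illustrative, not a NIST-measured value.

```python
# Back-of-the-envelope: why 1:N "investigative" search against a huge gallery
# effectively guarantees spurious candidates when the probe subject isn't enrolled.
# gallery_size is Clearview's claimed figure; false_match_rate is hypothetical.
gallery_size = 60_000_000_000      # claimed scraped images
false_match_rate = 1e-6            # illustrative per-comparison false match rate

# Expected number of gallery images that clear the threshold purely by chance
# for a probe of someone who is not in the database at all:
expected_false_candidates = gallery_size * false_match_rate
print(f"Expected spurious candidates: {expected_false_candidates:,.0f}")  # ~60,000

# And in ranked-candidate mode the system returns its top-k most similar faces
# regardless of score, so analysts always get "hits" to review.
```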
The Fourth Amendment "Loophole"
The central theme of the discussion focuses on the legality and ethics of the government purchasing data it is constitutionally forbidden from collecting itself. Users argue that buying "off-the-shelf" surveillance circumvents the Fourth Amendment (protection against unreasonable search and seizure). Several commenters assert that if the government cannot legally gather data without a warrant, it should be illegal for it to simply purchase that same data from a private broker like Clearview AI.
State Power vs. Corporate Power
A debate emerged regarding the distinction between public and private entities.
- Unique State Harms: One user argued that a clear distinction remains necessary because only the government holds the authority to imprison or execute citizens ("send to death row"), implying government usage requires higher standards of restraint.
- The "De Facto" Government: Counter-arguments suggested that the separation is functionally "theatrics." Users contended that tech companies now act as a "parallel power structure" or a de facto government. By relying on private contractors for core intelligence work, the government effectively deputizes corporations that operate outside constitutional constraints.
Legal Precedents and the Third-Party Doctrine
The conversation turned to specific legal theories regarding privacy:
- Third-Party Doctrine: Some users questioned whether scraping public social media actually violates the Fourth Amendment, citing the Third-Party Doctrine (the idea that you have no expectation of privacy for information voluntarily shared with others).
- The Carpenter Decision: Others rebutted this by citing Carpenter v. United States, arguing that the Supreme Court is narrowing the Third-Party Doctrine in the digital age and that the "public" nature of data shouldn't grant the government unlimited warrantless access.
Historical Analogies and Solutions
One commenter drew an analogy to film photography: legally, a photo lab could not develop a roll of film and hand it to the police without a warrant just because they possessed the physical negatives. They argued digital data should be treated similarly. Proposed solutions ranged from strict GDPR-style data collection laws to technical obfuscation (poisoning data) to render facial recognition ineffective.
IBM Triples Entry Level Job Openings. Finds Limits to AI
Submission URL | 28 points | by WhatsTheBigIdea | 5 comments
IBM says it’s tripling entry‑level hiring, arguing that cutting junior roles for AI is a short‑term fix that risks hollowing out the future talent pipeline. CHRO Nickle LaMoreaux says IBM has rewritten early‑career jobs around “AI fluency”: software engineers will spend less time on routine coding and more on customer work; HR staff will supervise and intervene with chatbots instead of answering every query. While a Korn Ferry report finds 37% of organizations plan to replace early‑career roles with AI, IBM contends growing its junior ranks now will yield more resilient mid‑level talent later. Tension remains: IBM recently announced layoffs, saying combined cuts and hiring will keep U.S. headcount roughly flat. Other firms echo the bet on Gen Z’s AI skills—Dropbox is expanding intern/new‑grad hiring 25%, and Cognizant is adding more school graduates—while LinkedIn cites AI literacy as the fastest‑growing U.S. skill.
Discussion Summary:
Commenters expressed skepticism regarding both the scale of IBM’s hiring and its underlying motives. Users pointed to ongoing age discrimination litigation against the company, suggesting the pivot to junior hiring acts as a cost-saving mechanism to replace higher-paid, senior employees (specifically those over 50). Others scrutinized IBM's career portal, noting that ~240 entry-level listings globally—and roughly 25 in the U.S.—seems negligible for a 250,000-person company, though one user speculated these might be single "generic" listings used to hire for multiple slots. It was also noted that this story had been posted previously.
Driverless trucks can now travel farther distances faster than human drivers
Submission URL | 22 points | by jimt1234 | 16 comments
Aurora’s driverless semis just ran a 1,000-mile Fort Worth–Phoenix haul nonstop in about 15 hours—faster than human-legal limits allow—bolstering the case for autonomous freight economics.
Key points:
- Why it matters: U.S. Hours-of-Service rules cap human driving at 11 hours with mandatory breaks, turning a 1,000-mile trip into a multi-stop run. Aurora says autonomy can nearly halve transit times, appealing to shippers like Uber Freight, Werner, FedEx, Schneider, and early route customer Hirschbach.
- Network today: Driverless operations (some still with an in-cab observer) on Dallas–Houston, Fort Worth–El Paso, El Paso–Phoenix, Fort Worth–Phoenix, and Laredo–Dallas. The company plans Sun Belt expansion across TX, NM, AZ, then NV, OK, AR, LA, KY, MS, AL, NC, SC, GA, FL.
- Scale and safety: 30 trucks in fleet, 10 running driverlessly; >250,000 driverless miles as of Jan 2026 with a “perfect safety record,” per Aurora. >200 trucks targeted by year-end.
- Tech/ops: Fourth major software release broadens capability across diverse terrain and weather and validates night ops. Second-gen hardware is slated to cut costs. Paccar trucks currently carry a safety observer at manufacturer request; International LT trucks without an onboard human are planned for Q2.
- Financials: Revenue began April 2025; $1M in Q4 and $3M for 2025 ($4M adjusted incl. pilots). Net loss was $816M in 2025 as Aurora scales.
CEO Chris Urmson calls it the “dawn of a superhuman future for freight,” predicting 2026 as the inflection year when autonomous trucks become a visible Sun Belt fixture.
Here is a summary of the discussion on Hacker News:
Safety Statistics and Sample Size
The most active debate concerned the statistical significance of Aurora's safety claims. While Aurora touted a "perfect safety record" over 250,000 driverless miles, commenters argued that this sample size is far too small to draw meaningful conclusions. Users pointed out that professional truck drivers often average over 1.3 million miles between accidents, meaning Aurora needs significantly more mileage to prove it is safer than a human.
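To make the sample-size point concrete, here is a minimal back-of-the-envelope sketch, assuming a Poisson model of crash counts and the standard "rule of three" for zero observed events. The mileage figures are the ones cited above; the human baseline is the rough number commenters used, not an official statistic.

```python
# Back-of-the-envelope: what 250,000 incident-free miles can (and can't) show,
# assuming a Poisson crash model and the "rule of three" for zero observed events.
human_miles_per_accident = 1_300_000   # rough baseline cited by commenters
aurora_driverless_miles = 250_000      # miles with a "perfect safety record"

# With zero events over n miles, an approximate 95% upper confidence bound
# on the accident rate is 3/n ("rule of three").
upper_bound_rate = 3 / aurora_driverless_miles          # accidents per mile
print(f"95% bound: ~1 accident per {1 / upper_bound_rate:,.0f} miles")  # ~83,333

# That bound is far worse than the ~1.3M-mile human baseline, so the data
# cannot yet distinguish the trucks from (or rank them above) human drivers.
# Zero incidents over roughly 3x the human baseline (~3.9M miles) would be
# needed before the 95% bound even crosses the human rate.
print(f"Miles needed for the bound to reach the human rate: "
      f"{3 * human_miles_per_accident:,.0f}")
```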
Regulatory Arbitrage
Commenters noted that the "efficiency" gains—beating human transit times by hours—are largely due to bypassing human limitations rather than driving speed. Users described this as "regulation arbitrage," as the software does not require the federally mandated rest breaks that cap human drivers at 11 hours of operation.
Hub-to-Hub Model vs. Rail
There was consensus that the "hub-to-hub" model (autonomous driving on interstates, human drivers for the complex last mile) is the most viable path for the technology. However, this inevitably triggered a debate about infrastructure, with critics joking that this system is simply an "inefficient railway." Defenders of the trucking approach countered that rail infrastructure in the specific region mentioned (LA/Phoenix) is currently insufficient or non-existent for this type of freight.
Skepticism and Market Optimism
Opinions on the company's trajectory were mixed. Some users worried the technology is "smoke and mirrors," citing a lack of detail regarding how the trucks handle complex scenarios like warehouses, docks, and urban navigation. Conversely, others noted that Aurora appears to be delivering on timelines where competitors like Tesla have stalled, pointing to the company's rising stock price (up ~52% in the last year) as a sign of market confidence.
Spotify says its best developers haven't written code since Dec, thanks to AI
Submission URL | 17 points | by samspenc | 18 comments
Spotify says its top devs haven’t written a line of code since December—AI did
- On its Q4 earnings call, Spotify co-CEO Gustav Söderström said the company’s “best developers have not written a single line of code since December,” attributing the shift to internal AI tooling.
- Engineers use an in-house system called Honk, powered by generative AI (Claude Code), to request bug fixes and features via Slack—even from a phone—then receive a finished build to review and merge, speeding deployment “tremendously.”
- Spotify shipped 50+ features/changes in 2025 and recently launched AI-driven Prompted Playlists, Page Match for audiobooks, and About This Song.
- Söderström argued Spotify is building a non-commoditizable data moat around taste and context (e.g., what counts as “workout music” varies by region and preference), improving models with each retraining.
- On AI-generated music, Spotify is letting artists/labels flag how tracks are made in metadata while continuing to police spam.
Why it matters: If accurate at scale, Spotify’s workflow hints at a tipping point for AI-assisted development velocity—and underscores how proprietary, behavior-driven datasets may become the key moat for consumer AI features. (Open questions: code review, testing, and safety gates when deploying from Slack.)
Hacker News Discussion Summary
There is significant skepticism in the comments regarding co-CEO Gustav Söderström's claim, with users contrasting the "efficiency" narrative against their actual experience with the Spotify product.
- App Quality vs. AI Efficiency: The most prevalent sentiment is frustration with the current state of the Spotify desktop app. Commenters complain that the app already consumes excessive RAM and CPU cycles just to stream audio; many argue that if AI is now writing the software, it explains why the app feels bloated or unoptimized (with one user noting the Linux version is currently broken).
- The "Code Review" Reality: Several engineers speculate that "not writing lines of code" doesn't mean the work is finished—it implies developers are now "wading through slop-filled code reviews." Users worry this workflow will lead to technical debt and a collapse of code quality as senior engineers get burned out checking AI-generated commits.
- Safety and Standards: The concept of deploying via Slack triggered alarm bells. Commenters equate this to "testing in production" or bypassing critical thinking protections, suggesting it represents terrible development hygiene rather than a breakthrough.
- Cynicism toward Leadership: Some view the CEO's statement as corporate theater—either a misunderstanding of engineering (confusing "typing" with "building") or a way to game performance reviews. One user invoked Office Space, joking that not writing code for years is usually a sign of slacking off, not hyper-productivity.