AI Submissions for Mon May 04 2026
1966 Ford Mustang Converted into a Tesla with Working 'Full Self-Driving'
Submission URL | 192 points | by Brajeshwar | 157 comments
1966 Mustang gets Model 3 guts—and working FSD (Supervised) for ~$40k
- A Sacramento Tesla recycler (Calimotive) spent two years and about $40,000 converting a 1966 Ford Mustang to a dual‑motor Model 3 setup, including the 15" touchscreen, OTA updates, Tesla seats, Cybertruck yoke, charge port in the old gas-cap spot, and features like Autopilot, Sentry, Summon, and “Full Self-Driving” (Supervised).
- Chassis hack: They grafted three sections of a 2024 Model 3 floor and seats into the Mustang, shortening the battery case to fit without changing exterior dimensions. Estimated ~400 hp, 471 lb‑ft, 0–60 mph ~3.5s.
- FSD on a non‑Tesla: Cameras were retrofitted to the classic body, and the system reportedly works—likely the first non‑Tesla running FSD. That’s notable given Tesla’s networks are trained on tightly defined camera positions.
- Efficiency claim: A test drive reported 258 Wh/mi and ~194 miles of range at ~80% SOC—roughly Model 3 territory despite the worse aerodynamics (a quick consistency check follows this list). Some owners in the comments dispute the comparison, citing lower personal averages in a Model Y and expecting closer to ~210 Wh/mi from a Model 3.
- Bigger picture: Suggests Tesla’s hardware/software stack is more portable than licensing progress implies. Elon Musk has talked about licensing FSD to other automakers; none have signed on. This DIY build underscores feasibility even as commercial deals stall.
- Market context: EV conversions are booming (est. $5.9B in 2024, ~9% CAGR to 2034). With companies charging $75k+ for Tesla-based classics, this ~$40k DIY looks like a bargain.
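A quick consistency check on those numbers (a back-of-the-envelope sketch, not from the article; it assumes range scales linearly with state of charge, which real packs only approximate):

```python
# Sanity-check the reported efficiency figures from the test drive.
wh_per_mile = 258        # reported consumption
miles_at_80_pct = 194    # reported remaining range at ~80% SOC

usable_at_80_pct_kwh = wh_per_mile * miles_at_80_pct / 1000  # ~50.1 kWh
implied_full_pack_kwh = usable_at_80_pct_kwh / 0.80          # ~62.6 kWh

print(f"Energy behind the 194-mile figure: {usable_at_80_pct_kwh:.1f} kWh")
print(f"Implied full-pack capacity: {implied_full_pack_kwh:.1f} kWh")
```

The implied ~62.6 kWh pack is in line with a standard-range Model 3, so the reported figures are at least internally consistent, even if individual owners see better real-world efficiency.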
Why HN cares: Real-world evidence that Tesla’s vision stack can tolerate nonstandard camera placements hints at broader adaptability—and raises questions about access, support, liability, and whether third parties will push FSD into places Tesla and OEMs won’t.
Hacker News Daily Digest: The “Teslafied” ’66 Mustang and the FSD Debate
Today’s most fascinating hardware hack involves a Sacramento recycler dropping a 1966 Ford Mustang body onto a 2024 Tesla Model 3 chassis—resulting in a 400-hp EV with a Cybertruck yoke, the Model 3 touchscreen, and most notably, working Full Self-Driving (FSD).
Here is what the Hacker News community had to say about the build, the tech, and the broader implications of FSD on a non-standard vehicle:
1. The "Ship of Theseus" Build Initial reactions clarified the nature of the project. While the headline might suggest some deep software hackery to get Tesla's brain to control a classic internal combustion engine (ICE), commenters were quick to point out that this is essentially a "Mustang body kit mapped onto a Model 3 chassis." Still, grafting three sections of a Model 3 floor to fit a vintage shell without changing its exterior dimensions was widely praised as an incredibly cool engineering feat.
2. A Technical Marvel for Sensor Calibration
For the software engineers and self-driving industry vets in the thread, the biggest shock wasn't the physical swap, but that FSD actually works on a new vehicle body.
- The Industry Standard: Commenters working in autonomous-vehicle tech pointed out that traditional sensor arrays (cameras, LiDAR, radar) are rigidly calibrated. Moving a sensor even 10 millimeters usually breaks the system, requiring a complex recalibration before sensor fusion works again.
- Tesla’s Vision Advantage: The fact that Tesla's FSD adapted to the Mustang's non-standard camera placements validates Tesla’s "vision-only" approach. Instead of relying on perfectly mounted factory lenses, Tesla built robust software-based self-calibration (the car calibrates itself over a roughly 10-minute drive by observing road markers and other fixed targets) to save manufacturing costs. This DIY project inadvertently proved just how portable and adaptable that software stack has become (a toy sketch of the idea follows below).
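To make the self-calibration idea concrete, here is a toy sketch of vanishing-point calibration, a generic computer-vision technique (this is not Tesla's actual pipeline, and the intrinsics matrix K below is invented for illustration): lane lines parallel to the direction of travel converge at a vanishing point, and back-projecting that point recovers the camera's yaw and pitch.

```python
# Toy extrinsic self-calibration: recover a camera's yaw/pitch relative to
# the driving direction from the vanishing point of straight lane lines.
# Generic pinhole-camera math, NOT Tesla's algorithm; K is hypothetical.
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],   # focal length fx, principal point cx
              [   0.0, 1000.0, 360.0],   # focal length fy, principal point cy
              [   0.0,    0.0,   1.0]])

def orientation_from_vanishing_point(u, v):
    """Back-project the vanishing point (u, v) to get the forward axis in
    camera coordinates, then read off yaw and pitch in degrees."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d /= np.linalg.norm(d)
    yaw = np.degrees(np.arctan2(d[0], d[2]))                      # left/right
    pitch = np.degrees(np.arctan2(-d[1], np.hypot(d[0], d[2])))   # up/down
    return yaw, pitch

# A camera mounted ~2 degrees off-axis sees the vanishing point shifted
# ~35 px from the principal point:
print(orientation_from_vanishing_point(675.0, 360.0))  # approx (2.0, 0.0)
```

A production system would aggregate many such observations over a drive and solve for the full extrinsics, but the toy version shows why millimeter-precise factory mounting isn't a hard requirement for a vision-first stack.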
3. The Inevitable "FSD" Naming Debate
As is tradition on Hacker News, the mention of Tesla's autonomous software sparked a fierce semantic and philosophical debate:
- The Skeptics: Many users dragged Tesla for the "Full" in Full Self-Driving, calling it intellectually dishonest gaslighting. Critics pointed to Elon Musk’s long list of missed Level 4 autonomy deadlines dating back to 2016, arguing that until Tesla assumes legal liability for the driving (Level 4), it is merely an Advanced Driver Assistance System (ADAS), akin to Ford's BlueCruise or GM's SuperCruise. Some joked it should be renamed "Featureless Sometimes Driving."
- The Defenders & Users: Conversely, active users chimed in to defend the current state of the software. Several commenters claimed they run FSD for 96% to 100% of their daily errands, praising recent updates (like v13). They also noted that modern refreshed models use interior eye-tracking cameras to verify driver attention, removing the constant steering-wheel "nag" and making the experience feel highly autonomous.
The Takeaway
While the Hacker News community remains deeply divided on Tesla’s marketing ethics and the true definition of "Self-Driving," everyone largely agreed on one thing: a $40k DIY project proving that Tesla's vision stack can dynamically adapt to an entirely different vehicle body is a massive flex for their software engineering. It opens the door to a fascinating future of high-tech restomods and third-party EV conversions.
Let's talk about LLMs
Submission URL | 174 points | by cdrnsf | 165 comments
The author stakes out a pragmatic lane on LLMs, insisting on precise terminology (LLMs, not the mushy “AI”) and focusing strictly on programming. Framing today’s hype through Fred Brooks’ “No Silver Bullet,” they argue LLMs mostly chip away at accidental complexity (syntax, boilerplate, API wrangling) while leaving the essential complexity of software—specification, design, and validating the conceptual model—largely intact. If the essential work is well over 10% of the job (it is), wiping out the rest can’t deliver a 10x leap by itself.
Highlights
- Terminology matters: the debate is really about large language models, not “AI” in the abstract.
- Useful lens: Brooks’ essence vs. accidents—LLMs help with representation, not with nailing down what to build and why.
- Expect real gains, not miracles: big boosts on scaffolding, translation, and routine code; limited impact on deep design, correctness, and system behavior.
- Diminishing returns: unless accidental work is more than 90% of total effort, eliminating all of it can’t produce an order-of-magnitude improvement (see the arithmetic sketch after this list).
- Cultural note: calls out the “LLM + Gell-Mann amnesia” vibe—people predict LLMs will transform every field but their own.
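The diminishing-returns bullet is Amdahl's-law-style arithmetic; a minimal sketch of the bound:

```python
# Amdahl-style bound behind Brooks' "No Silver Bullet" argument: if a tool
# eliminates only the accidental fraction f of total effort, the overall
# speedup can never exceed 1 / (1 - f).
def max_speedup(accidental_fraction: float) -> float:
    return 1.0 / (1.0 - accidental_fraction)

for f in (0.5, 0.8, 0.9, 0.95):
    print(f"accidental work = {f:.0%} -> at most {max_speedup(f):.1f}x faster")
# 50% -> 2.0x, 80% -> 5.0x, 90% -> 10.0x, 95% -> 20.0x:
# a 10x overall gain requires the accidental share to reach 90%.
```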
Why it matters
- Sets sober expectations: LLMs are power tools, not a replacement for human judgment about specs, architecture, and testing.
- Guides adoption: lean on LLMs for boilerplate and exploration; don’t outsource the hard thinking that defines successful software.
Key quote (Brooks): “I believe the hard part of building software to be the specification, design, and testing of this conceptual construct… We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.”
Hacker News Daily Digest: The LLM Reality Check
Welcome to today’s top story on Hacker News. Today, the community is deeply engaged in a philosophical and practical debate about the actual utility of AI in software development, sparked by a pragmatic essay titled: Let’s talk about LLMs: No Silver Bullet for Software.
Drawing on Fred Brooks' legendary "No Silver Bullet" framework, the submission argues that LLMs primarily solve "accidental complexity" (syntax, API wrangling) but cannot solve the "essential complexity" of software (specification, architecture, and conceptual design).
Here is a summary of the intense discussion happening in the comments.
1. The "Paradigm Shift" Debate: Table Saw or Revolution?
The central fault line in the comments is whether LLMs represent a fundamental paradigm shift or just a superior tool.
- The Tool Camp: Users like grdsj compare the advent of LLMs to the transition from the slide rule to the calculator—a massive convenience, but not a change in the fundamental physics of engineering. To them, LLMs are "pretty tools" or a "metaphorical table saw," not Earth-shattering magic.
- The Revolution Camp: Conversely, highly bullish users (mfr, spnmrtn) argue this is absolutely a paradigm shift. They envision engineers offloading tasks to "agentic workflows" that research, test, and write code 10x faster. They push back hard against skeptics, arguing that dismissing this "hockey-stick trend" is a form of "AI blindness," even as critics point out that current AI (like Siri) still frequently fails at basic tasks.
2. Scope, Skill Atrophy, and "Sycophantic" AI
When it comes to daily practice, most developers agree that LLMs shine in tightly limited scopes.
- Where it works: Commenters note that AI is fantastic for semi-manual editing, semantic transformations, and scaffolding (especially when integrated tightly into the IDE, like with Cursor).
- Where it fails: Users like mchlchsr point out that using LLMs for deep architectural planning or large-scale codebase changes is highly inefficient. Furthermore, prmph warns that LLMs often act as "sycophantic confirmation machines." Because they tend to agree with the user's prompts, they can comfortably validate bad architectural decisions—allowing developers to go "faster in the wrong direction." Several users also voiced concerns about junior developer "skill atrophy" if foundational coding is heavily abstracted.
3. Democratization vs. The Reality of Hard Constraints
An interesting philosophical sub-thread emerged regarding gatekeeping.
- Gatekeeping: User tptck argued that leaning on Fred Brooks to highlight LLM limitations sounds suspiciously like "guild bylaws"—professional developers trying to gatekeep laypeople from using AI to solve practical problems.
- The Reality Check: Others (sv, yts, kdd) strongly pushed back, framing it as a matter of software stability and liability rather than elitism. They point out a glaring contradiction in AI marketing: selling LLMs as "so easy anyone can use them" while simultaneously claiming "you must use them or fall behind." They note that professional software is bound by hard constraints—SOC2 compliance, HIPAA regulations, and strict cybersecurity standards. Allowing "vibes-based" AI-generated code into production without rigorous human architectural oversight is not democratizing engineering; it is an invitation for massive data breaches and system failures.
The Takeaway: The HN community largely agrees with the original author's premise. LLMs are incredibly powerful accelerators for the tedious parts of programming. However, they cannot absorb the liability, system design, or domain expertise required to build secure, robust software. They are a revolutionary tool, but the human must remain firmly in the driver's seat.
Transformers Are Inherently Succinct (2025)
Submission URL | 59 points | by bearseascape | 9 comments
Transformers are Inherently Succinct (arXiv:2510.19315)
- What’s new: The authors introduce “succinctness” as a metric for expressive power and prove that transformers can describe formal languages far more compactly than classic representations like finite automata and LTL formulas (a classic example of such a representation gap appears after this list).
- Why it matters: A small transformer can encode behaviors that would require huge automata or long logical formulas—a double-edged sword that explains their practical power while complicating formal understanding.
- Big consequence: Verifying properties of transformers is EXPSPACE-complete—decidable but astronomically intractable in general—placing hard limits on scalable, exact formal verification.
- Takeaway: Transformers offer extreme compression of rules/patterns, which helps efficiency and capability, but makes them harder to analyze, interpret, and certify. Expect more work on restricted architectures, over-approximations, and modular specs to regain tractability.
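For intuition about what a succinctness gap between representations looks like, consider a classic automata-theory example (a standard textbook illustration, not an example from the paper): the language "the k-th symbol from the end is 1" has an NFA with k+1 states, yet any DFA for it needs 2^k states, as the subset construction below demonstrates.

```python
# Classic succinctness gap: L_k = { binary strings whose k-th symbol from
# the end is 1 } has a (k+1)-state NFA, but the subset construction yields
# 2**k DFA states, all pairwise distinguishable, so the blowup is unavoidable.
def reachable_dfa_states(k: int) -> int:
    # NFA: state 0 loops on 0/1 and nondeterministically marks a '1'
    # (0 --1--> 1); states 1..k-1 count remaining symbols; state k accepts.
    def step(subset, sym):
        nxt = {0}
        if sym == 1:
            nxt.add(1)
        nxt |= {q + 1 for q in subset if 1 <= q < k}
        return frozenset(nxt)

    start = frozenset({0})
    seen, frontier = {start}, [start]
    while frontier:                      # explore all reachable subsets
        s = frontier.pop()
        for sym in (0, 1):
            t = step(s, sym)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return len(seen)

for k in (2, 4, 8, 12):
    print(f"k={k:2d}: NFA states = {k + 1:2d}, DFA states = {reachable_dfa_states(k)}")
```

The paper's claim is analogous in flavor: transformers sit on the compact side of such gaps relative to finite automata and LTL formulas, which is also why verifying them exactly is so expensive.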
Paper: Pascal Bergsträßer, Ryan Cotterell, Anthony W. Lin. DOI: 10.48550/arXiv.2510.19315
Here is a daily digest summary of the Hacker News discussion regarding the paper "Transformers are Inherently Succinct."
Hacker News Discussion: Transformers and the Meaning of "Succinctness"
The conversation in the comments reveals a split between deep theoretical critiques of the paper's mathematical baselines and a broader semantic confusion over how computer scientists use the word "succinctness."
1. A Technical Critique: Does the "Exponential Advantage" Actually Hold?
One highly technical commenter pushed back against the paper's core premise, questioning the baselines used for comparison. They argue that comparing transformers to un-reduced Linear Temporal Logic (LTL) expressions might be giving transformers an unfair advantage.
- The Counter-Argument: If the researchers had compared transformers to more optimized logical representations—such as Reduced Ordered Binary Decision Diagrams (ROBDDs) or LTL with parameterized sub-formulas—the transformer's "exponential advantage" might vanish entirely.
- Theory vs. Practice: The same user pointed out a structural disconnect in the paper: it proves its results with hand-constructed transformers, whereas real-world transformers are trained (e.g., from data such as truth tables), a fundamentally different and messier process that the theoretical construction doesn't account for.
2. Lost in Translation: Math vs. Linguistics
A large portion of the thread was dedicated to clearing up a fundamental misunderstanding of the paper's title.
- Several commenters initially assumed "succinctness" referred to the linguistic abilities of Large Language Models—specifically, that larger models have a better vocabulary and can summarize concepts using brief, metaphor-rich, or expressive text.
- Another user had to gently course-correct the thread, clarifying that the paper is strictly about theoretical computer science. In this context, succinctness has nothing to do with prose style; it refers purely to how small a formal model can be while still representing complex logic rules. The original commenter gracefully acknowledged the correction.
3. Tangent: Strict Coding Language vs. "Flowery" Real Language
Spurred by the linguistic misunderstanding, a side debate emerged about how AI should use language. One user suggested that to improve AI reasoning, models might need to output highly rigid, standardized language (referencing IETF RFC 2119, which dictates the strict use of words like "MUST," "SHOULD," and "MAY"). Another user countered that nuanced, "flowery" language is actually a powerful tool, and that evaluating text with strict, simple heuristics is flawed because language must naturally adapt to context and the intended audience.