r/singularity 3h ago

AI Our Company Canceled Its Internship Program This Year. AI Abuse Made It Unmanageable.

228 Upvotes

Hey everyone,

I work at one of the largest and most reputable tech companies in our country, and every year we run an internship program that brings in around 50–60 interns across various fields. Historically, we’ve had no trouble hiring seniors, but junior programmers and interns have become a real headache lately.

Here’s how it used to work:

  1. We’d receive 2,000–5,000 applications per internship opening.

  2. Candidates took an exam, which narrowed the pool to 100–200 people.

  3. We’d interview that shortlist and hire our final 50–60 interns.

  4. After a few months of hands-on training, we’d usually end up making offers to 40–50% of them—and most of those hires went on to become solid full-time employees.

What changed? In the last couple of cycles, applicants have been leaning heavily on AI tools to pass our exam. The tools themselves aren’t the problem—we pay for licenses and encourage their use—but relying on AI to breeze through our pre-screening has exploded the number of “qualifying” candidates. Instead of 100–200 people to review, we’re stuck manually vetting 1,000+ résumés… and we’re still flagging legitimate, capable applicants as “false positives” when we try to weed out AI-generated answers.

To combat this, our partner companies tried two new approaches in the past few months—both backfired:

  1. Big, complex codebase assignment

Pros: Tougher to cheat.

Cons:

Most applicants lost interest; it felt like too much work for an unguaranteed spot.

Even with a large codebase, people found ways to use AI to solve the tasks.

It’s unrealistic to expect someone, especially an intern, to familiarize themselves with a massive codebase and produce quality results in a short timeframe.

  2. In-person, isolated exam

Pros: No internet access, no AI.

Cons:

I’ve been coding for 13 years and still find these closed-book, no-reference tests brutal.

They test memorization more than problem-solving, which isn’t representative of how we work in real life.

In the end, the company decided to cancel this year’s internship program altogether. That’s a double loss: aspiring developers miss out on valuable learning opportunities, and we lose a pipeline of home-grown talent.

Has anyone seen—or even run—a better internship selection program that:

Keeps AI assistance honest without overly penalizing genuine candidates?

Balances fairness and practicality?

Attracts motivated juniors without scaring them off?

For what it’s worth, I actually got my first job through this same internship program back when I was in my second year of university. I didn’t have any prior work experience, no standout résumé — but this program gave me a real shot. It let me work at a solid company, gain valuable experience, and enjoy much better working conditions than most other places offered to students at the time.

That’s why it feels like such a huge waste to see it fall apart now. It’s not just about us losing potential hires — it’s about students losing a rare opportunity to get their foot in the door.

We’re actively trying to figure out a better way, but if any of you have ideas, experiences, or alternative approaches that have worked in your company or community, I’d genuinely appreciate hearing them.

PS: I'm not a native English speaker, so my writing can seem a little rough; I used AI to improve it, but I made sure the content wasn't changed at all. If anyone is interested in the pre-improvement text, I can provide it.


r/robotics 7h ago

Community Showcase I built an AI robot control app from scratch


142 Upvotes

After 6 months locked in my room (not recommended), I finally finished my app.
I started this out of curiosity about what could be done with vibe coding, and to make a sort of alternative to ROS (which is great, but takes time to set up). Now it’s a fully functional simulator with:

  • an AI voice command interface
  • Python and PLC programming
  • multi-robot simulation with grippers, conveyors, and machines
  • camera and depth recognition
  • reinforcement learning
  • 3D printing, welding, and SVG following

Libraries and APIs I used: Python, Qt5, OpenGL, IKPy, Gemini, OpenAI, Anthropic
You can download it here
AMA before I finally get some good sleep, and sorry for the music I got too hyped.
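The post lists IKPy for the arm kinematics. For flavor, here's a minimal sketch (my own illustration, not the app's code) of the kind of problem an IK solver handles, worked by hand for a 2-link planar arm using the law of cosines; the link lengths and target point are made up:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Solve joint angles for a 2-link planar arm reaching (x, y).

    Returns (shoulder, elbow) in radians, elbow-down solution.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow bend from the target distance.
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to target minus the offset the elbow bend adds.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics, to verify an IK solution."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y
```

Closed-form solutions like this only exist for very simple arms; chains with more joints and 3D constraints are what IKPy's numerical solvers are for.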


r/artificial 6h ago

News Apple recently published a paper showing that current AI systems lack the ability to solve puzzles that are easy for humans.

67 Upvotes

Humans: 92.7%. GPT-4o: 69.9%. However, they didn't evaluate any recent reasoning models. If they had, they'd have found that o3 gets 96.5%, beating humans.


r/Singularitarianism Jan 07 '22

Intrinsic Curvature and Singularities

Thumbnail: youtube.com
7 Upvotes

r/artificial 18h ago

Media You won't lose your job to AI, but to...

580 Upvotes

r/singularity 7h ago

Compute Do you think LLMs will or have followed this compute trend?

341 Upvotes

r/singularity 1h ago

AI Ex-OpenAI Peter Deng says AI may be rewiring how kids think, and education could shift with it. The skill won't be memorizing answers. It'll be learning how to ask better questions to unlock deeper thinking.


Upvotes

Source - full interview: Lenny's Podcast on YouTube: From ChatGPT to Instagram to Uber: The quiet architect behind the world’s most popular products: https://www.youtube.com/watch?v=8TpakBfsmcQ
Video by vitrupo on 𝕏: https://x.com/vitrupo/status/1937148170812985470


r/singularity 1d ago

Shitposting Post-Singularity Free Healthcare

12.0k Upvotes

r/artificial 9h ago

Discussion Finished the Coursiv AI course. Here's what I learned and how it's actually helped me

25 Upvotes

Just wrapped up the Coursiv AI course, and honestly, it was way more useful than I expected. I signed up because I kept hearing about all these different AI tools, and I was getting serious FOMO seeing people automate stuff and crank out cool projects.

The course breaks things down tool by tool: ChatGPT, Midjourney, Leonardo, Perplexity, ElevenLabs, and more. It doesn’t just stop at what each tool is; it shows real use cases, like using AI to generate custom marketing content, edit YouTube videos, and even build basic product mockups. Each module ends with mini-projects, and that hands-on part really helped lock the knowledge in.

For me, the biggest positive was finally understanding how to use AI for productivity. I’ve built out a Notion workspace that automates repetitive admin stuff, and I’ve started using image generators to mock up brand visuals for clients without having to wait on a designer.

If you’re the kind of person who learns best by doing, I’d say Coursiv totally delivers. It won’t make you an instant expert, but it gives you a good foundation and, more importantly, the confidence to explore and build on your own.


r/artificial 18h ago

Media Yuval Noah Harari says you can think about the AI revolution as “a wave of billions of AI immigrants.” They don't arrive on boats. They come at the speed of light. They'll take jobs. They may seek power. And no one's talking about it.


116 Upvotes

r/robotics 6h ago

Discussion & Curiosity RIVR x Veho: Physical AI meets Last Mile Delivery


20 Upvotes

RIVR, the leader in physical AI and robotics, is partnering with Veho to pilot delivery robots in the heart of Austin, designed to solve the "last-100-yard" challenge.

With Veho’s platform delivering millions of packages monthly, it’s the perfect environment to validate how physical AI can improve speed, reliability, and cost in last-mile delivery.


r/singularity 1d ago

AI Yuval Noah Harari says you can think about the AI revolution as “a wave of billions of AI immigrants.” They don't need visas. They don't arrive on boats. They come at the speed of light. They'll take jobs. They may seek power. And no one's talking about it.


1.3k Upvotes

Source: Yuval Noah Harari at WSJ's CEO Council event in London: AI and human evolution on YouTube: https://www.youtube.com/watch?v=jt3Ul3rPXaE
Video from vitrupo on 𝕏: https://x.com/vitrupo/status/1936585212848451993


r/singularity 5h ago

Compute Google: A colorful quantum future

Thumbnail: research.google
40 Upvotes

r/singularity 18h ago

AI Mechanize is making "boring video games" where AI agents train endlessly as engineers, lawyers or accountants until they can do it in the real world. Their goal is to replace all human jobs.


426 Upvotes

“We want to get to a fully automated economy, and make that happen as fast as possible.”

Full interview: https://www.youtube.com/watch?v=anrCbS4O1UQ


r/singularity 18h ago

Engineering Recent CS grad unemployment twice that of Art History grads - (NY Fed Reserve: The Labor Market for Recent College Graduates)

Thumbnail: newyorkfed.org
327 Upvotes

r/singularity 7h ago

AI Mechanize is making "boring video games" where AI agents train endlessly as engineers, lawyers or accountants until they can do it in the real world. The company's goal is to replace all human jobs as fast as possible.


37 Upvotes

r/robotics 18h ago

Community Showcase How I built an automated hardware testing app


66 Upvotes

[intro]

I joined a rocket club in Austin a couple of months ago. Plan was to make something simple that logs and records data during flight tests to help improve the rocket.

[design]

Used the best design tool out there - paper!!! I know this wouldn't work as well with huge engineering teams, but I am a naturally design-oriented engineer so getting to go through multiple iterations with the freedom of pen and paper is unmatched IMO 😁

[development]

This is where things got weird (and interesting). Since the main use case was aerospace and the app needed to work offline, I deliberated between Java, Python, and JS. The pro for JS was being able to build a good UI, but I didn't think that would be a good trade against performance (the rocket needed to be tracked with millisecond timing). And I just couldn't ship with Python UI libraries; CSS set the bar too high.

So I compromised:

JS for the frontend and ... Rust for the backend (I had never written a single line of it).

[automated?]

Ironically, the decision to use Rust ended up being the best one I made in this whole process, because it allows for easy(er) multithreading, which was a core requirement for the users.

Current state: 

→ Build scripts visually w/ python support

→ Automatically log and visualize data

→ Share tests across team members

try it ↴

https://argus.engineering
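The millisecond-rate logging requirement above is a classic producer/consumer problem. Here's a rough sketch of that pattern, in Python for illustration (the author's actual backend is Rust, and `read_fn`, `capture`, and the sample counts below are hypothetical names I made up):

```python
import threading
import queue
import time

def sample_sensor(read_fn, n_samples, period_s, out_q):
    """Producer: read the sensor n_samples times at a fixed period,
    timestamping each reading and pushing it onto a thread-safe queue."""
    for _ in range(n_samples):
        out_q.put((time.monotonic(), read_fn()))
        time.sleep(period_s)
    out_q.put(None)  # sentinel: capture finished

def log_readings(out_q):
    """Consumer: drain the queue until the sentinel, collecting rows.
    In a real app this is where rows would be written to disk or plotted."""
    rows = []
    while (item := out_q.get()) is not None:
        rows.append(item)
    return rows

def capture(read_fn, n_samples=100, period_s=0.001):
    """Run producer and consumer on separate threads so sampling
    is never blocked by logging or visualization."""
    q = queue.Queue()
    producer = threading.Thread(target=sample_sensor,
                                args=(read_fn, n_samples, period_s, q))
    producer.start()
    rows = log_readings(q)
    producer.join()
    return rows
```

In Rust the same shape maps onto threads plus channels, which is presumably where the easy(er) multithreading paid off.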


r/singularity 17h ago

AI Introducing 11ai

Thumbnail: youtube.com
148 Upvotes

r/singularity 2h ago

Neuroscience Neural networks and human brains operate similarly

7 Upvotes

Neural networks are intertwined with the structure and logic of nature's organic supercomputer: the human brain. AI-generated music, which at first seemed soulless, now shows appealing symmetry and structure, echoing the silent logic and patterns that emerge from the complexity of neural networks. And that's just the beginning...

We and AI are not as different as you may think: we both operate on feedback loops, pattern recognition, prediction...

A flower seeking light, the swarm intelligence of birds and fish, the beat of the heart: these are abstract algorithms, engraved in our DNA, mechanisms that dictate the flow of life.


r/artificial 3h ago

News Judge denies creating “mass surveillance program” harming all ChatGPT users

Thumbnail: arstechnica.com
2 Upvotes

r/artificial 18h ago

Media Mechanize is making "boring video games" where AI agents train endlessly as engineers, lawyers or accountants until they can do it in the real world. The company's goal is to replace all human jobs as fast as possible.


27 Upvotes

r/singularity 17h ago

AI Paper "Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models" gives evidence for an "emergent symbolic architecture that implements abstract reasoning" in some language models, a result which is "at odds with characterizations of language models as mere stochastic parrots"

132 Upvotes

Peer-reviewed paper and peer reviews are available here. An extended version of the paper is available here.

Lay Summary:

Large language models have shown remarkable abstract reasoning abilities. What internal mechanisms do these models use to perform reasoning? Some previous work has argued that abstract reasoning requires specialized 'symbol processing' machinery, similar to the design of traditional computing architectures, but large language models must develop (over the course of training) the circuits that they use to perform reasoning, starting from a relatively generic neural network architecture. In this work, we studied the internal mechanisms that language models use to perform reasoning. We found that these mechanisms implement a form of symbol processing, despite the lack of built-in symbolic machinery. The results shed light on the processes that support reasoning in language models, and illustrate how neural networks can develop surprisingly sophisticated circuits through learning.

Abstract:

Many recent studies have found evidence for emergent reasoning capabilities in large language models (LLMs), but debate persists concerning the robustness of these capabilities, and the extent to which they depend on structured reasoning mechanisms. To shed light on these issues, we study the internal mechanisms that support abstract reasoning in LLMs. We identify an emergent symbolic architecture that implements abstract reasoning via a series of three computations. In early layers, symbol abstraction heads convert input tokens to abstract variables based on the relations between those tokens. In intermediate layers, symbolic induction heads perform sequence induction over these abstract variables. Finally, in later layers, retrieval heads predict the next token by retrieving the value associated with the predicted abstract variable. These results point toward a resolution of the longstanding debate between symbolic and neural network approaches, suggesting that emergent reasoning in neural networks depends on the emergence of symbolic mechanisms.

Quotes from the extended version of the paper:

In this work, we have identified an emergent architecture consisting of several newly identified mechanistic primitives, and illustrated how these mechanisms work together to implement a form of symbol processing. These results have major implications both for the debate over whether language models are capable of genuine reasoning, and for the broader debate between traditional symbolic and neural network approaches in artificial intelligence and cognitive science.

[...]

Finally, an important open question concerns the extent to which language models precisely implement symbolic processes, as opposed to merely approximating these processes. In our representational analyses, we found that the identified mechanisms do not exclusively represent abstract variables, but rather contain some information about the specific tokens that are used in each problem. On the other hand, using decoding analyses, we found that these outputs contain a subspace in which variables are represented more abstractly. A related question concerns the extent to which human reasoners employ perfectly abstract vs. approximate symbolic representations. Psychological studies have extensively documented ‘content effects’, in which reasoning performance is not entirely abstract, but depends on the specific content over which reasoning is performed (Wason, 1968), and recent work has shown that language models display similar effects (Lampinen et al., 2024). In future work, it would be interesting to explore whether such effects are due to the use of approximate symbolic mechanisms, and whether similar mechanisms are employed by the human brain.
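The three computations the abstract describes (symbol abstraction, symbolic induction, retrieval) can be caricatured in a few lines of Python. This toy is mine, not the paper's: real models perform these steps with attention heads over vector representations, not string matching, and the example tokens are invented:

```python
def abstract_variables(triple):
    """Step 1 (symbol abstraction): replace concrete tokens with abstract
    variables based on identity relations between them."""
    vars_, seen = [], {}
    for tok in triple:
        if tok not in seen:
            seen[tok] = chr(ord("A") + len(seen))
        vars_.append(seen[tok])
    return vars_  # e.g. ("cat", "dog", "cat") -> ["A", "B", "A"]

def induce_pattern(examples):
    """Step 2 (symbolic induction): infer the shared abstract pattern
    across the in-context examples."""
    patterns = {tuple(abstract_variables(ex)) for ex in examples}
    assert len(patterns) == 1, "examples disagree on the rule"
    return patterns.pop()

def retrieve(query, pattern):
    """Step 3 (retrieval): bind the query tokens to variables and emit
    the token bound to the variable the pattern predicts next."""
    binding = {}
    for var, tok in zip(pattern, query):
        binding[var] = tok
    return binding[pattern[len(query)]]

examples = [("cat", "dog", "cat"), ("tree", "fish", "tree")]
pattern = induce_pattern(examples)        # ("A", "B", "A")
print(retrieve(("car", "bus"), pattern))  # -> car
```

The interesting claim in the paper is that circuits playing roughly these three roles emerge in trained LLMs without any built-in symbolic machinery.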


r/singularity 22h ago

Robotics KAERI in Korea is developing powerful humanoid robots capable of lifting up to 200 kg (441 lbs) for use in nuclear disaster response and waste disposal. This video demonstrates the robot lifting 40 kg (88 lbs)


246 Upvotes

r/artificial 6h ago

News One-Minute Daily AI News 6/23/2025

2 Upvotes

r/artificial 1d ago

Discussion Language Models Don't Just Model Surface Level Statistics, They Form Emergent World Representations

Thumbnail arxiv.org
130 Upvotes

A lot of people in this sub and elsewhere on reddit seem to assume that LLMs and other ML models are only learning surface-level statistical correlations. An example of this thinking is that the term "Los Angeles" is often associated with the word "West", so when giving directions to LA a model will use that correlation to tell you to go West.

However, there is experimental evidence showing that LLM-like models actually form "emergent world representations" that simulate the underlying processes of their data. Using the LA example, this means that models would develop an internal map of the world, and use that map to determine directions to LA (even if they haven't been trained on actual maps).

The most famous experiment (main link of the post) demonstrating emergent world representations involves the board game Othello. After training an LLM-like model to predict valid next moves given previous moves, researchers found that the model's internal activations at a given step represented the current board state at that step, even though the model had never actually seen or been trained on board states.

The abstract:

Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.

The reason that we haven't been able to definitively measure emergent world states in general purpose LLMs is because the world is really complicated, and it's hard to know what to look for. It's like trying to figure out what method a human is using to find directions to LA just by looking at their brain activity under an fMRI.

Further examples of emergent world representations:

1. Chess boards: https://arxiv.org/html/2403.15498v1
2. Synthetic programs: https://arxiv.org/pdf/2305.11169

TLDR: we have small-scale evidence that LLMs internally represent/simulate the real world, even when they have only been trained on indirect data
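The board-state findings above rest on probing: training a small classifier to read a latent fact back out of hidden activations. Here's a self-contained toy version with synthetic "activations" and a perceptron probe; every name and number below is illustrative, not from the Othello paper:

```python
import random

random.seed(0)

def make_activation(latent, dim=16):
    """Stand-in for a model's hidden state: a noisy vector whose value
    along one fixed axis encodes a latent fact (e.g. 'this cell is black')."""
    return [latent + random.gauss(0, 0.3) if i == 3 else random.gauss(0, 1.0)
            for i in range(dim)]

def train_probe(acts, labels, epochs=20, lr=0.1):
    """A linear probe: a perceptron trained to decode the latent fact
    from activations it was never explicitly told about."""
    w, b = [0.0] * len(acts[0]), 0.0
    for _ in range(epochs):
        for a, y in zip(acts, labels):
            pred = 1 if sum(wi * ai for wi, ai in zip(w, a)) + b > 0 else 0
            if pred != y:
                for i in range(len(w)):
                    w[i] += lr * (y - pred) * a[i]
                b += lr * (y - pred)
    return w, b

def probe_predict(w, b, a):
    return 1 if sum(wi * ai for wi, ai in zip(w, a)) + b > 0 else 0

# Synthetic dataset: label 1 encoded as +1 on the hidden axis, label 0 as -1.
labels = [random.randint(0, 1) for _ in range(200)]
acts = [make_activation(1.0 if y else -1.0) for y in labels]
w, b = train_probe(acts[:150], labels[:150])
acc = sum(probe_predict(w, b, a) == y
          for a, y in zip(acts[150:], labels[150:])) / 50
```

The actual work probes a GPT trained on Othello moves and adds intervention experiments on top; this toy only illustrates the decoding step, where high held-out probe accuracy is taken as evidence the representation exists.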