Emulating a Pokémon game via a neural network
This post references an external article or page. Consider it a bookmark, and in no way an endorsement of the article, author, or website. I frequently bookmark content I disagree with.
🔗 via madebyoll.in.
Highlights
I made a playable Pokémon overworld. It looks (mostly) like a normal video game, and you can try it in your web browser.
I expect neural networks will subsume games. From the outside in, starting from the final video frames, chewing through G-buffers, replacing layers of “modular and interpretable” code one by one.
Once you’ve successfully used a neural network for a single layer in your nightmarishly-complicated, it-only-works-at-all-because-integration-tests-enforce-the-intended-behavior program, someone is going to ask the obvious question about the adjacent layers.
This layer of code is so complicated. No one seems to understand how to make it faster, or how to make it better, or why it works at all.
We’re feeding the output to a neural network anyway.
Couldn’t the network just… do this part too?
Since neural networks are programs, the answer is always yes! They can mimic anything. As long as you specify the behavior you want, really, really precisely.
This is an astounding demo: a playable Pokémon overworld emulated end-to-end by a neural network. The output quality is obviously rough, but it's surprising just how good it already is. Imagine this in ten years. The accompanying commentary is a fantastic read.
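To make the "couldn't the network just do this part too?" question concrete: what's being described is behavioral cloning of code. Below is a minimal sketch of that pattern in PyTorch — my own illustration, not the article's code, and `legacy_layer` is a hypothetical stand-in for whatever layer you want to retire. You log input/output pairs from the existing function, fit a small network to them, then swap it in.

```python
# Minimal sketch (assumes PyTorch): replace a layer of code with a network
# by imitation. The existing function acts as a teacher; we record its
# input/output behavior and train a small MLP to reproduce it.
import torch
import torch.nn as nn

def legacy_layer(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the "nightmarishly complicated" code
    # being replaced.
    return torch.sin(3.0 * x) + 0.5 * x

# 1. Specify the behavior you want, really precisely: sample it.
xs = torch.rand(10_000, 1) * 4.0 - 2.0   # inputs drawn from [-2, 2]
ys = legacy_layer(xs)                     # recorded "ground truth" behavior

# 2. Fit a network to the recorded behavior.
net = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2_000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(xs), ys)
    loss.backward()
    opt.step()

# 3. Swap it in. The network now "does this part too" -- to within the
#    precision of the behavior you sampled.
with torch.no_grad():
    print("imitation MSE:", nn.functional.mse_loss(net(xs), ys).item())
```

The catch is exactly the one the author names: the network is only as correct as the behavior you managed to specify, so inputs outside the sampled distribution can drift in unspecified ways.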