#ai
Here’s a GPT trained on 17th-century texts. As a result, it answers in a historical style, complete with outdated scientific concepts.
The author has been at OpenAI for a year and observed that generative models closely approximate their training datasets.
Recently, the Biden administration announced its framework to manage the deployment of AI by executive order. The order throws a bone to both sides of the AI argument: in some regards, the administration is embracing AI; in others, it’s hampering it. Overall, I’m disappointed.
This team taught GPT to navigate iOS and Android by sending it screenshots and giving it instructions.
President Biden released an executive order on AI development, drawing parallels to the early fears and regulatory considerations during the dawn of the microprocessor and internet, highlighting that past technological advancements were less hindered by government intervention.
I don’t want to single out jrincayc because these arguments are common from those concerned about AGI, and the author does acknowledge many arguments against their recommended approach. Sharing nonetheless, because this post clearly demonstrates several of the problems with the anti-AI movement within tech.
Researchers from UC Berkeley and universities in Shanghai and Osaka have trained a model to analyse brain activity recorded while people listened to music and reconstruct the audio, including a recognisable version of Pink Floyd’s Another Brick in the Wall.
Some analysts predict AI could enable a 30% annual growth rate in the US economy, but Tyler Cowen argues for more moderate estimates, expecting a boost of ¼ to ½ of a percentage point.
Large organisations often function with a single directive mind, usually a CEO or equivalent figure, supported by middle managers.
Researchers have developed a deep learning model that can recover keystrokes from audio recordings of typing, with 95% accuracy when captured by a nearby microphone and 93% when captured over Zoom.
The study presents an AI model that can predict how viral variants affect protein–protein binding.
Large Language Models (LLMs) like ChatGPT work by predicting the next word in a sequence. They represent words as word vectors (lists of numbers); similar words have similar vectors, placing them closer together in an imagined “word space.”
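The “closer in word space” idea can be made concrete with cosine similarity. A minimal sketch, using made-up three-dimensional vectors for illustration (real models learn vectors with hundreds or thousands of dimensions):

```python
import math

# Toy word vectors — hypothetical values chosen for illustration only.
vectors = {
    "cat":    [0.90, 0.80, 0.10],
    "dog":    [0.85, 0.75, 0.20],
    "banana": [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means 'pointing the same way'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["cat"], vectors["dog"]))     # high: related words
print(cosine_similarity(vectors["cat"], vectors["banana"]))  # low: unrelated words
```

Because “cat” and “dog” point in nearly the same direction, their similarity is close to 1, while “cat” and “banana” score much lower — that geometric closeness is what “similar words have similar vectors” means.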
Noy and Zhang conducted an experiment with college-educated professionals using ChatGPT for writing tasks.
Scientists analysed accelerometry data from those who later developed Parkinson’s. Machine learning models trained on this data were highly accurate in predicting the disease.
OpenAI has introduced Code Interpreter, which enhances ChatGPT’s abilities in mathematics and language tasks by leveraging Python code and a large memory capacity, reducing errors and improving accuracy.
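The core trick is delegation: rather than predicting the digits of an answer token by token, the model writes Python and runs it, so the arithmetic is exact. A minimal sketch of that pattern (a hypothetical illustration, not OpenAI’s implementation):

```python
# Sketch of the "delegate math to code" pattern behind tools like Code Interpreter.
# Imagine the model emitted this expression instead of guessing the digits itself:
expression = "123456789 * 987654321"

# Evaluating it in a restricted namespace yields an exact result —
# no token-by-token digit prediction, hence no arithmetic hallucinations.
result = eval(expression, {"__builtins__": {}})
print(result)
```

A plain LLM often gets long multiplications like this wrong; the interpreter never does, which is where the accuracy gains come from.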
Timothy B. Lee explores how AI could impact employment by focusing on professional translators.
Digital avatars are already a very real feature of the media in Indonesia.
Andreessen on the AI moral panic. While I wouldn’t endorse every point with absolute confidence, it’s a solid read. Most conversations on AI risk ignore the opportunity cost of not advancing AI further. And I don’t see any evidence that a dangerous AGI is imminent.
Cameron surveys small businesses in Australia in this newsletter; apparently a surprising number of them already use ChatGPT in their day-to-day operations.
Well, text-based user interfaces are back in vogue thanks to ChatGPT, and to many users and builders, this is disappointing. Why would we want to throw away our long history of graphical user interfaces for inferior, difficult-to-use, text-based interfaces?
If modern AI is equivalent to electricity, the internet, or the printing press, is it a good idea for most of a continent to strangle it in the cot?
Kudos to Grimes for facing the new reality of AI head on, looking past the inevitable challenges to find opportunities and benefits for artists.
The takeaway from this piece: being an expert in a technology does not make someone an expert in how it will impact economies and cultures.
Meta recently open sourced (to researchers) their LLM, which has led to some amazing on-device AI demonstrations. I’m also fascinated by local inference, because it could prove incredibly cheap (compared to centralised server-side inference) and could amplify Apple’s chip advantage, given they have the most capable device chips for AI.
Early CGI was mind-blowing at the time; in hindsight it looks terrible and unwatchable. The same loop is playing out with AI. A year ago, I was losing my mind over DALL-E every single day. Today, its output looks like absolute trash compared to Midjourney.