#ai
Large Language Models (LLMs) like ChatGPT work by predicting the next word in a sequence, using word vectors, lists of numbers that represent words; words with similar meanings have similar vectors, placing them closer together in an imagined “word space.”
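A minimal sketch of the word-vector idea, using made-up three-dimensional vectors and cosine similarity (real models learn vectors with hundreds or thousands of dimensions; the words and numbers here are purely illustrative):

```python
import numpy as np

# Toy word vectors: each word is a short list of numbers. Real models learn
# much longer vectors from large amounts of text.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Higher values mean the two words sit closer together in 'word space'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["king"], vectors["queen"]))  # close to 1
print(cosine_similarity(vectors["king"], vectors["apple"]))  # much lower
```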
Noy and Zhang conducted an experiment with college-educated professionals using ChatGPT for writing tasks.
Scientists analysed accelerometry data from those who later developed Parkinson’s. Machine learning models trained on this data were highly accurate in predicting the disease.
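A minimal, illustrative sketch of the general approach, assuming scikit-learn and entirely synthetic data (the study’s actual features, labels, and models are not reproduced here): train a classifier on summary features derived from accelerometry and evaluate it with cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Fake data: 200 participants, each summarised by a few movement features
# (e.g. mean acceleration, variance, fraction of low-activity time).
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = later diagnosed with Parkinson's

# On real accelerometry features this is where high predictive accuracy would
# show up; on this random data the score hovers around 0.5 (chance level).
model = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```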
OpenAI has introduced Code Interpreter, which enhances ChatGPT’s abilities in mathematics and language tasks by leveraging Python code and a large memory capacity, reducing errors and improving accuracy.
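A rough illustration of why running Python helps (the numbers and snippet below are hypothetical, not taken from OpenAI’s product): instead of predicting the digits of an answer token by token, a Code Interpreter-style model can write and execute a small piece of Python and report the runtime’s exact result.

```python
# Hypothetical example of delegating arithmetic to Python: the answer comes
# from the interpreter, not from next-word prediction, so the digits can't
# be "hallucinated" one at a time.
result = 48921 * 7354
print(result)  # 359765034
```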
Timothy B. Lee explores how AI could impact employment by focusing on professional translators.
Digital avatars are already a very real feature of the media in Indonesia.
Andreessen on the AI moral panic. While I wouldn’t endorse every point with absolute confidence, it’s a solid read. Most conversations on AI risk ignore the opportunity cost of not advancing AI further. And I don’t see any evidence that a dangerous AGI is imminent.
Cameron surveys small businesses in Australia. This newsletter covers ChatGPT: apparently a surprising number of small businesses already use it in their day-to-day operations.
If modern AI is equivalent to electricity, the internet, or the printing press, is it a good idea for most of a continent to strangle it in the cot?
Kudos to Grimes for facing the new reality of AI head on, looking past the inevitable challenges to find opportunities and benefits for artists.
The takeaway from this piece: just because someone is an expert in a technology does not mean they are an expert in how it will impact economies and cultures.
Meta recently open-sourced (to researchers) their LLM, which has led to some amazing on-device AI demonstrations. I’m also fascinated by local inference: it could prove incredibly cheap compared to centralised server-side inference, and it could amplify Apple’s chip advantage, given they have the most capable on-device chips for AI.
Early CGI was mind-blowing, right up until it looked terrible and unwatchable. The same loop is playing out with AI. A year ago I was losing my mind over DALL-E every single day; today, its output looks like absolute trash compared to Midjourney.
A new system was able to capture exact words and phrases from the brain activity of someone listening to podcasts. This breakthrough is incredibly important.
Differentiation is required to build a great startup. This means you need to do things differently. But you can’t reinvent everything as you go: founders need to recognise where it makes sense to be contrarian, and where it makes sense to adopt common practices. This week, we explore the value of contrarian approaches in startup building, and how the introduction of AI copilots affects this principle.
Important quote from George Box. I came across it in the context of AI, but it applies to all science. We’re just building models for understanding nature, which is probably too complex to truly understand, but some of our models are useful nonetheless.
In my opinion, the credibility of AI-risk researchers has absolutely plummeted in the past few months. What has become clear is that this field is primarily driven by hypothetical posturing rather than any kind of real research and experimentation.
For each of the 14 weeks, more people searched for ChatGPT than for Taylor Swift, according to Google Trends data. This holds true outside of tech hubs: Minnesotans, Idahoans, and Vermonters are all searching for ChatGPT.
Some software engineers are worried about AI taking their jobs. Some SaaS founders are excited for cheaper R&D costs, while others are fearful of the new market entrants this will empower. Today, we explore the potential impact of developer copilots, no-code, and app-generating LLMs.
Today, seventeen countries have a GDP greater than that of the British Empire at its peak, including many former colonies and some nations with far fewer people than the Empire had (like Australia).
An opinion piece for Time by Eliezer Yudkowsky, a prominent AI-risk researcher and writer.
This is a fantastic book that I highly recommend to anyone wanting to dip their toes into the meaning of AI, consciousness, and the history of ideas on these topics.
The best way to make great things is to break the creative process into steps, and optimise each step.
The author claims OpenAI has intentionally limited GPT’s power in an attempt to manage AGI risk.
An interview with Nvidia CEO Jensen Huang about the impact of ChatGPT, Nvidia’s new cloud service, and how Nvidia is adapting to new geopolitical and competitive realities.