#ai
Lex successfully used ChatGPT to dispute a parking fine.
Cloud computing enabled the mobile revolution because when devices like the iPhone first launched, mobile hardware was not capable of doing much on device. By delegating complex work to the cloud, developers were able to be much more ambitious with their mobile apps.
Many GPT early adopters have noted that GPT seems to be a more useful search engine than Google for many types of queries.
Very imaginative ruminations on the future of AI. The articulation of our future superpowers is particularly interesting.
This is an article I wrote for the Faster Times blog. With all of the hype around generative AI, especially ChatGPT, I decided to try to write it with the help of AI. Ultimately, I failed for a few reasons.
This Hacker News user is demonstrating how easily you can find people’s anonymous accounts by comparing writing styles.
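The post doesn't spell out the exact technique, but a common stylometric approach is to compare character n-gram frequencies between accounts. Here's a minimal sketch of that idea (assuming TF-IDF over character n-grams plus cosine similarity; the example posts are made up):

```python
# Minimal stylometry sketch (assumption: the HN user's exact method isn't
# described; char n-gram TF-IDF + cosine similarity is one common approach).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_posts = [
    "I think the tradeoff here is latency versus throughput...",
    "In my experience the tradeoff is usually latency vs throughput...",
]
anonymous_post = "The real tradeoff, as always, is latency versus throughput..."

# Character n-grams capture punctuation habits, spelling, and phrasing tics.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
vectors = vectorizer.fit_transform(known_posts + [anonymous_post])

# Compare the anonymous post against each known post.
scores = cosine_similarity(vectors[-1], vectors[:-1])
print(scores)  # higher scores suggest a closer stylistic match
```

It's the punctuation habits and phrasing tics that make writing style so hard to anonymise, which is exactly what makes this demonstration unnerving.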
The team behind this study has managed to reconstruct the images a person is looking at, using non-invasive brain recordings. Bridging the gap between electronic and biological computers could be highly impactful.
Some in the open source software community are unhappy that GitHub Copilot, an AI tool for developers, was trained on open source code.
This technology is a big deal, and this is a great six-part series explaining how it all works.
AI innovation over the past decade has been focused on content recommendation/curation/moderation. Most of the value of this innovation has been captured by incumbents like Google, Meta, and Amazon, because this technology favoured players who were already serving a lot of content to a lot of people and could afford the costs of running these models, and because its overall impact on technology was only incremental.
Over the past two decades we’ve been making the world’s information more machine-readable by reformatting data into APIs, open standards, and standardised file formats. It seems machines are going to learn to parse human-friendly (and historically machine-unfriendly) data long before we finish this transformation.
A great interview on China Talk regarding AI and war. This took place before the technology export controls recently announced by the US government.
A few months ago we were in awe of DALL-E; now video and three-dimensional projects are already launching. What an exciting space.
From the creators of DALL-E and GPT. The examples of what Whisper can do are pretty astounding.
A great technical demo (and paper) from Meta. Using only the sensors in the Quest headset (and reinforcement learning) they can recreate the user’s pose.
A great profile of Greg Rutkowski, an artist who is a more popular prompt than Picasso for AI-generated art (tools like DALL-E let users request an image in the style of any sufficiently well-known artist). Does this encourage future artists to produce less art in order to avoid AI models copying them? In the future, will these tools offer a way for artists to exclude their work (like a Disallow rule in robots.txt for search engines)? While I don’t think these tools will kill art entirely, I’m sure they will harm commercial artists.
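No such opt-out standard exists yet as far as I know, but if it worked like robots.txt it would only take a few lines for a training crawler to honour it. A sketch using Python’s standard robots.txt parser (the crawler name is hypothetical):

```python
# Hypothetical opt-out check, modelled on robots.txt (assumption: there is no
# agreed standard for AI training crawlers; "ImageTrainingBot" is made up).
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://example-artist.com/robots.txt")
parser.read()

# A scraper building a training set could skip pages the artist disallows
# for this (hypothetical) crawler before downloading any images.
if parser.can_fetch("ImageTrainingBot", "https://example-artist.com/portfolio/"):
    print("Allowed to crawl this page")
else:
    print("Artist has opted out; skip this page")
```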
Action Transformer by Adept is an AI model that allows you to navigate and use websites and web apps using text commands. I had assumed that these types of features would come to voice assistants (like Alexa or Siri) via voice-first APIs, but this already looks much more capable than any voice assistant. So, maybe this is the technology that will make the capabilities of voice assistants more universal.
Some digital esoterica for the AI generation.
This is an astounding demo of a playable Pokémon game emulation powered by a neural network. The output is obviously pretty rough, but it’s still surprising just how good it is already. Imagine this in 10 years. The commentary provides a fantastic read.
A very interesting step-by-step walkthrough of how AI image generation works.
The list of ideas for AI-first products at the bottom of this grant is particularly interesting. For example, how much of the work on UpWork can be automated?
I don’t think animals are ever going to want to talk to us, but it will be very interesting if ML upends the notion that animals don’t do a lot of talking to each other.
Research is locked up in poorly formatted, inconsistently structured PDFs, which makes training models on it much more difficult than training text or image models on web data. Interesting to see how this is currently being tackled.
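As a quick illustration of why (the filename is a placeholder): even a clean extraction with a library like pypdf tends to interleave two-column layouts, flatten tables and equations into noise, and repeat headers and footers on every page.

```python
# Quick illustration of the problem, using pypdf ("paper.pdf" is a placeholder).
from pypdf import PdfReader

reader = PdfReader("paper.pdf")
for page in reader.pages:
    text = page.extract_text() or ""
    # Naive cleanup: drop blank lines. Real pipelines need far more than this:
    # column detection, de-hyphenation, table and equation handling, etc.
    lines = [line for line in text.splitlines() if line.strip()]
    print("\n".join(lines))
```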