15 Interesting AI Experiments You Can Try Online

Crystal Crowder

Artificial intelligence experiments let you see some of the amazing things AI is capable of. From creating realistic people who don’t exist to helping you compose incredible soundtracks, AI experiments serve a wide variety of purposes. It’s also eerie just how good these are. If you’ve always been curious about what AI can do, try these experiments right in your browser, with no special hardware needed.

What Is Artificial Intelligence?


Artificial intelligence, or AI for short, is a branch of computer science focused on systems that mimic human intelligence. Instead of being limited to exactly what a human programmer tells a computer to do, an AI system continues to learn and adjust based on interactions and feedback.

While AI is becoming increasingly intelligent and prevalent in everyday life, it has actually been around for years. Some of the ways you interact with it daily include:

  • Email spam filters (AI might not be perfect, but it does keep out a surprising amount of spam.)
  • Smart assistants like Siri, Alexa, and Google Assistant
  • Streaming service recommendations on what to watch next
  • Self-driving cars (You may not interact with them daily, but they are on the roads in many areas.)
  • Online shopping
  • Online banking, especially smart alerts regarding suspicious usage

AI can be classified as narrow AI or artificial general intelligence (AGI). Narrow AI focuses on single tasks, such as Google’s search engine or smart assistants. These systems are built to perform and improve at one specific job, and that’s all.

AGI describes the far more ambitious goal of broad, human-like intelligence, and it remains largely theoretical. The techniques pushing in that direction are machine learning and its subset, deep learning. Programs are essentially fed data to help them learn as much as possible, then continue to learn and adapt without a programmer needing to continually add more code. This is the type of AI that functions more like a human brain: as more data is received, more connections are made, helping the program make real-time decisions in new situations.

Deep learning is the more advanced of the two, making systems that use it more adaptable. Self-driving cars are a good example, as they have to adjust to constantly changing driving conditions.
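To make “learning from feedback” concrete, here is a minimal, illustrative sketch rather than any production AI system: a tiny perceptron that learns the logical OR function purely from labeled examples and an error signal. The data and learning rate are chosen only for illustration.

```python
# A minimal sketch of "learning from data": a perceptron adjusts its
# weights from labeled examples instead of following hand-written rules.

def train_perceptron(examples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), label in examples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = label - pred            # the feedback drives the update
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Teach it logical OR purely from examples -- no OR rule is ever coded.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

The point is that the program is never told what OR means; it converges on the behavior by repeatedly correcting its own mistakes.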

Now that you have a basic understanding of AI, let’s get on with some ways to experience it from your browser.

The days of obviously fake photos are long gone. If you want to be amazed at how accurate artificial intelligence experiments can be, just play around with This Person Does Not Exist.


Trained on images of actual humans, the AI tool generates completely fake, yet realistic, images of people who don’t exist. The only real indication that these aren’t real photos is the area surrounding the face. You’ll see blurring and weird artifacts, like extra fingers on a hand that just happens to also be in the picture.

You can also learn how to train your own version, along with trying out a version specifically for art, cats, horses, and more.

This AI experiment takes text captions and turns them into images. Be prepared for some very strange results, though. The renderings are crude, but you can usually see something at least a little similar to what you typed. The tool combines Tao Xu’s AttnGAN with Runway, an AI video-editing tool.


You can also give Runway’s main software a try for free. You’re limited to 15 minutes, but it gives you even more AI to play around with.

Writing generators are nothing new, but Cyborg Writer takes a slightly different approach. You write a sentence, or nothing at all, and you get new lines in a writing style of your choice, such as Taylor Swift, Shakespeare, or even the Linux kernel.


Of course, most of it is nonsense, but it’s still fun to play around with. When I tried Pop Music, the lyrics really did fit common pop songs. The same holds true for Shakespeare and the Linux kernel. Mix the styles and you really will get a mind-blowing piece of work.

Want to see your very bad drawings turned into more realistic images? Pix2Pix does just that. If you’re a terrible artist like me, the realistic rendering might look more like a horrific blob.


The tool features several types of images you can try to draw, including cats, buildings, purses, and shoes. The examples shown on the site look far better than anything I came up with, as my strange black blob with a tail demonstrated. Yes, it was supposed to be a cat.

For a more game-like experience, try Evolution by Keiwan. You build a creature using joints, bones, and muscles. The tool then uses AI to simulate movement based on your design. With each generation, your creature evolves to become better at moving.


It seems simple enough, but it is interesting to see how your creations change and move. Plus, it’s just fun to see if you can create something that moves and lasts throughout generations or just falls flat.
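Experiments like this are typically driven by evolutionary algorithms: selection, crossover, and mutation repeated over generations. The sketch below is a generic, illustrative genetic algorithm, not Keiwan’s actual simulation; the fitness function here (counting ones in a bitstring) is just a stand-in for “how well the creature moves.”

```python
import random

random.seed(0)  # deterministic run for illustration

def fitness(genome):
    return sum(genome)  # stand-in for "distance the creature travelled"

def evolve(pop_size=20, genes=10, generations=30):
    # Start from a random population of bitstring "designs".
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)      # crossover: splice two parents
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:             # mutation: rare random bit flip
                child[random.randrange(genes)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the best designs always survive to the next generation, fitness can only climb, which is exactly the “creatures get better at moving” effect you see in the browser.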

Another game-like AI experiment is I Told You This Was a Bad Idea. If you ever played text-based games on DOS, this may seem familiar. However, you’re not limited to specific words, phrases, or commands.


This game uses AI to provide matching responses based on the questions you ask. It’s your job to ask the right questions to get out of the situation. In fact, you have to ask the right questions just to figure out what the situation is. It’s fun and changes every time you play. Some responses repeat, but it’s surprisingly good at matching responses to your questions.

Hate trying to come up with the perfect font pairing? Let Fontjoy do it for you. This experiment combines up to three fonts, and you choose whether you want them anywhere from extremely different (high contrast) to extremely similar (low contrast). You can lock any font you want to keep once it pops up.


Start by generating three random fonts that fit your matching criteria. You can click a font at any point to see its best matches. It’s fun for font lovers and anyone picking fonts for a project.

Microsoft AI has several demos you can test out, including Text Analytics. You enter your text, short or long, and the AI analyzes it to judge sentiment, link to relevant Wikipedia articles, and generate search result cards from Bing.


You can try out other Microsoft AI demos, too, to see what types of projects Microsoft is working on and better understand how AI works in general.
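As a toy illustration of the sentiment half of such a service: the real Text Analytics demo uses trained language models, but the hand-made word lists below show the basic idea of mapping text to a positive or negative signal.

```python
# Toy sentiment scorer. The word lists are made up for illustration;
# a real service learns these associations from data rather than a list.

POSITIVE = {"good", "great", "love", "excellent", "happy", "fun"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad", "boring"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A trained model generalizes far beyond any fixed list, but the input-to-label shape of the task is the same.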

Coming up with rap lyrics isn’t easy. If the words aren’t flowing, try DeepBeat. It’s an AI rap-lyric generator that lets you add your own lines, customize keywords and themes, and get suggested rhyming and keyword lines based on individual lines.


It’s a fun experiment that provides endless entertainment and ideas. Some lines don’t make much sense, but others aren’t bad at all. If nothing else, it’s a good way to spark ideas and help you craft impressive lines of your own.
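Rhyme suggestion can be approximated very crudely by comparing word endings. The sketch below is not DeepBeat’s method, which analyzes vowel sounds across whole lines; it only shows the shape of the idea, with made-up candidate lines.

```python
def rhyme_score(a, b):
    """Crude rhyme proxy: length of the shared ending of the last words."""
    wa, wb = a.split()[-1].lower(), b.split()[-1].lower()
    n = 0
    while n < min(len(wa), len(wb)) and wa[-1 - n] == wb[-1 - n]:
        n += 1
    return n

def suggest_next(line, candidates):
    """Pick the candidate line whose last word best rhymes with `line`."""
    return max(candidates, key=lambda c: rhyme_score(line, c))

candidates = [
    "I keep it cool in the summer heat",
    "nothing in the world can slow me down",
    "walking slowly through the crowd",
]
best = suggest_next("move your feet to the beat", candidates)
```

Letter suffixes miss near-rhymes that sound alike but are spelled differently, which is why real systems work with phonemes instead.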

The Akinator AI experiment mimics the game 20 Questions. The game guesses what you’re thinking of by asking a series of questions, which you answer with Yes, No, Don’t Know, Probably, or Probably Not. You can choose to think of a character, animal, or object.


While the questions start out fairly general, they quickly get more specific, such as asking if the animal I was thinking of originated in Russia. I tried answering a few questions wrong just to see what would happen, but it still guessed correctly.
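The core of 20 Questions-style guessing is that each answer filters the candidate set by an attribute. Akinator’s real engine is far larger and probabilistic, which is why it tolerates wrong answers; this sketch, with a made-up character table, only shows the narrowing step.

```python
# A miniature 20-questions engine. The characters and attributes are
# invented for illustration; a real system has thousands of both.

CHARACTERS = {
    "cat":     {"animal": True,  "flies": False, "domestic": True},
    "eagle":   {"animal": True,  "flies": True,  "domestic": False},
    "toaster": {"animal": False, "flies": False, "domestic": True},
}

def guess(answers):
    """answers maps attribute -> True/False; returns a guess once unique."""
    candidates = [name for name, traits in CHARACTERS.items()
                  if all(traits[q] == a for q, a in answers.items())]
    return candidates[0] if len(candidates) == 1 else None
```

With only “is it an animal?” answered, several candidates remain and no guess is made; one more answer usually pins it down.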

Experiments with Google

Some of the most interesting artificial intelligence experiments come courtesy of Google. They cover everything from building a machine learning tool to drawing. While I won’t list every one of Google’s AI experiments, you should definitely try at least a few of the following.

If you’re not the best artist, AutoDraw is a must-have. You start with a crude representation of a shape and get suggestions for what you might be trying to draw. For example, I attempted to draw a moon and immediately got suggestions for a moon, elbows, and bananas, all similar to what I drew.


You can also try Quick, Draw!, an AI game that gives you a word to draw. The AI tries to guess your drawing in less than 20 seconds. It’s like Pictionary, but with AI.


Wish you could conduct a symphony in the privacy of your home? Semi-Conductor lets you do just that. By using your webcam, you conduct a symphony using different arm movements. It may feel silly at first, but then you can really get into it. I got to conduct Eine Kleine Nachtmusik during my session.

Another fun music option is A.I. Duet, where an AI pianist plays along with you.

You don’t need to be an experienced programmer to develop your own machine learning model. Teachable Machine lets you do it with no experience or coding necessary, though it is a more in-depth experiment.


You’ll have the chance to actually teach and train your model using your own examples. You’re able to use images, sounds, poses, and more. You’re free to use your final project however you wish, including hosting it online for free. If you’re not sure what’s possible, check out some of the sample projects and tutorials on Teachable Machine’s main page for inspiration.
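The “teach with your own examples” workflow can be sketched as nearest-neighbor classification over labeled examples you supply. Teachable Machine actually runs inputs through a pretrained network first; the plain two-number feature vectors and labels below are made up purely for illustration.

```python
import math

# Teachable-Machine-style workflow in miniature: you provide labeled
# examples, and a new input gets the label of its nearest neighbor.

def nearest_label(examples, x):
    """examples is a list of (label, feature_vector) pairs."""
    label, _ = min(((lbl, math.dist(vec, x)) for lbl, vec in examples),
                   key=lambda pair: pair[1])
    return label

# Pretend these are features extracted from webcam frames of two poses.
examples = [
    ("thumbs_up",   (0.9, 0.1)),
    ("thumbs_up",   (0.8, 0.2)),
    ("thumbs_down", (0.1, 0.9)),
    ("thumbs_down", (0.2, 0.8)),
]
```

Adding more of your own examples tightens the boundaries between classes, which is exactly why the tool asks you to record many samples per class.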

Semantris is a fun word association game that uses machine learning. It uses semantic search and natural language understanding technology to match what you type to the words in the list.


Arcade is a faster-paced version, and it’s impressive to see how quickly the AI responds to the words you type. You can also try Blocks, a slower version that uses stacked word blocks instead of a simple word list.
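Semantic matching like this generally works by representing words as vectors and ranking by cosine similarity. The tiny hand-made three-dimensional vectors below are purely illustrative, not the learned high-dimensional embeddings Semantris uses.

```python
import math

# Hand-made "embeddings"; the dimensions loosely mean water-ness,
# sky-ness, and animal-ness. Real embeddings are learned from text.
VECTORS = {
    "ocean": (0.9, 0.2, 0.1),
    "cloud": (0.3, 0.9, 0.0),
    "dog":   (0.1, 0.0, 0.9),
    "rain":  (0.7, 0.8, 0.0),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(clue, words):
    """Return the listed word most similar to the typed clue."""
    return max(words, key=lambda w: cosine(VECTORS[clue], VECTORS[w]))
```

Typing “rain” against a list containing “ocean,” “cloud,” and “dog” picks “cloud,” because their vectors point in nearly the same direction.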

If you like the AI games, you may want to try some hidden Google games .

Also read: 20 Hidden Google Games for You to Play

Finding something interesting to read next isn’t always easy. Let the AI experiment Talk to Books help. Simply enter a sentence or question to get book recommendations based on matching passages. For example, I asked “what are some great sword fights” and was presented with five passages from five different books, including “A Clash of Kings” by George R.R. Martin and “Skin Game” by Jim Butcher.


It’s recommended to use natural phrasing rather than a bare keyword for better results. It’s a fun way to find books and experiment with AI at the same time.

Can I create and submit my own experiment to Google?

Yes. If you’re working on something or want to build on any of Google’s experiments, you’re welcome to submit it to Google. Google is looking for innovative ideas, so be creative in how you use the technology.

What are some ways AI is being used to improve the world?

While viewing and shopping recommendations are useful, they don’t necessarily improve the world. The AI experiments above represent just a fraction of what’s possible. Microsoft’s AI for Good project showcases ways AI is helping make the world better. For instance, Seeing AI helps people with vision impairments perceive the world around them.

How can I learn more about artificial intelligence?

One reason artificial intelligence experiments like the ones above are so important is that they make AI easier to understand. If you want to learn more without getting a headache from too much tech jargon, check out Code.org’s AI section. It breaks down AI concepts and offers in-depth examples, interesting projects, and even educational materials for teaching AI to others.


Crystal Crowder

Crystal Crowder has spent over 15 years working in the tech industry, first as an IT technician and then as a writer. She works to help teach others how to get the most from their devices, systems, and apps. She stays on top of the latest trends and is always finding solutions to common tech problems.


AI Experiments


AI + Writing

Over the past six months, Google’s Creative Lab in Sydney has teamed up with the Digital Writers’ Festival team and an eclectic cohort of industry professionals, developers, engineers, and writers to test whether machine learning (ML) could be used to inspire writers.

These experiments set out to explore whether machine learning could be used by writers to inspire, unblock and enrich their process.

AI + Learning

The experiment index also spans Teachable Machine, What Neural Networks See, Visualizing High-Dimensional Space, AI + Drawing (Quick, Draw! and Sketch-RNN demos), Handwriting with a Neural Net, FreddieMeter, Semi-Conductor, Mood Board Search, Look to Speak, BYOTM (Bring Your Own Teachable Machine), LipSync by YouTube, Tiny Sorter, and Interplay Mode.

AI Experiments

To explore the unknown, driven by curiosity and creativity, to discover possibilities beyond our imagination.

Featured experiments include:

  • Emoji Picker: generate emojis based on a prompt
  • Explain Like I Am 5: explain a concept in simple terms, so that anyone can understand it
  • Summarise a text
  • Experiment at the intersection of AI and creativity
  • Inspire your inner storyteller and turn your ideas into videos
  • Transform text into images and explore endlessly
  • Describe a musical idea and hear it come to life (sample prompt: “lofi jazz for a quiet rainy day, influences from rnb with a catchy melody, atmospheric”)

TextFX: Supercharge your writing process with AI-powered language tools made in collaboration with Lupe Fiasco. Sample outputs include:

  • Sound-alike phrase breakdowns: “express whey (speedy delivery of dairy byproduct),” “express sway (to demonstrate influence),” “ex-press way (path without news media)”; “viable tea (drinkable tea),” “video ability (aptitude for filmmaking),” “via a billion tees (by way of many T-shirts)”
  • Acronym expansions: FLARE as “Focus, Learn, Achieve, Reach, and Execute,” “Flashing Lights Attracting Reflective Energy,” or “Flashing Light Attaining Reflective Energy”; METRO as “Moving Elevated Trains Rapidly Onward,” “Most Effective Transportation Route Options,” or “Moving Everything Through Roads Organizedly”; SPACE as “Simply Passing Across Celestial Environments,” “Stop Pretending Anything’s Certain Everywhere,” or “Stars, Planets, Asteroids, Comets and Everything”
  • Unexpected connections: coding ≈ poetry (both are forms of creative expression that use symbols to communicate meaning, one with letters and numbers, the other with words and sounds); library ≈ graveyard (both are repositories of knowledge, one through its collection of books, the other through tombstones and epitaphs that offer insights into lives passed)
  • Similes: “The time passed like molasses, each second an eternity”; “Planting a garden is like writing a novel; it takes time, patience, and a lot of hard work, but the rewards are worth it in the end”; “Keeping a secret is like holding a balloon underwater—the harder you try to keep it down, the harder it fights to get out”
  • Unexpected twists on everyday things: “a glass of water with a live goldfish inside,” “a glass of water from the Arctic Ocean,” “a glass of water that is the size of a house”; “a key that turns you invisible when you turn it,” “a key that opens a secret room inside your head,” “a key that can open any door in the world but only when it is dipped in Worcestershire sauce”; “a game of chess played on an Etch A Sketch,” “a game of chess where the chessboard is a waffle,” “a game of chess between two pigeons on top of a skyscraper”
  • Irreverent takes: “Turtlenecks are the mullets of the fashion world—business up top, party down below”; “Analog clocks are the perfect way to tell time if you’re trying to be mysterious”; “Succulents are the gateway drug to houseplants”; “Succulents are the hipster’s pet rock”
  • Sensory scene details: checking into a hotel (“the smell of fresh linens and air freshener,” “a fluffy bathrobe hanging on the back of the door,” “the sound of the ice machine churning out ice cubes”); reheated takeout (“soggy, wilted vegetables,” “the sound of styrofoam squeaking against the microwave,” “congealed sauce that has separated from the rest of the food”)
  • Word chains: “frogs, pond, water, fish, bait, hook, fishhook, lure”; “picture, frame, glass, window, open, door, knock, answer”; “layer, cake, birthday, party, guest, star, galaxy, universe”
  • Alliteration and word-list prompts: “words for describing food starting with a,” “musical instruments starting with b,” and phrases containing a word (“control panel,” “mind control,” “out of control”)


Lab Sessions: A new series of experimental AI collaborations

Aug 02, 2023


We’re introducing a new way to showcase our existing and future collaborations with people across disciplines, including artists, academics, scientists, and entrepreneurs.


Whether we’re developing products, advancing hardware or conducting new research, we know the most important thing is putting technology in the hands of actual people. For years we’ve been collaborating with individuals and communities across a variety of fields to shape emerging technologies to meet their own unique needs.

Labs.google is a place where we continue to experiment with AI: testing hypotheses, learning from one another and creating new technology together. But we also want to create a way to showcase our existing and future collaborations across all kinds of disciplines.

That’s why we’re announcing Lab Sessions, a series of experimental AI collaborations with visionaries – from artists to academics, scientists to students, creators to entrepreneurs. You can view our first three sessions below or at labs.google/sessions .

Dan Deacon x Generative AI

One of our most recent Sessions was with composer and digital musician Dan Deacon. Dan teamed up with Google Researchers to create a pre-show performance for Google I/O 2023. Dan experimented with our text-to-music model, MusicLM , to create new sounds for a new song. He used Bard , our conversational AI, to help him write a guided meditation. And he explored our generative video model, Phenaki , turning his lyrics into visuals projected on stage. Experiment with Bard and MusicLM yourself.


Lupe Fiasco x Large language models

We’ve also collaborated with rapper and MIT Visiting Scholar Lupe Fiasco to see how AI could enhance his creative process. As we began to experiment with the PaLM API and MakerSuite , it became clear that Lupe didn’t want AI to write raps for him. Instead, he wanted AI tools that could help him in his own writing process.

So we set out to build a new set of custom tools together. Lupe’s lyrical and linguistic techniques brought a whole new perspective to the way we prompt and create with large language models. The final result is an experiment called TextFX , where you’ll find 10 AI-powered tools for writers, rappers and wordsmiths of all kinds. Give it a try and if you want to take a closer look into how the experiment was built, check out this blog post or the open-sourced code .


Georgia Tech and RIT/NTID x Sign language recognition

Since last year, we’ve been working with a group of students from the Georgia Institute of Technology and the National Technical Institute for the Deaf (NTID) at the Rochester Institute of Technology (RIT) to explore how AI computer vision models could help people learn sign language in new ways.

Together with Google researchers and the Kaggle community, the students came up with a game called PopSignAI. This bubble launcher game teaches 250 American Sign Language (ASL) signs based on the MacArthur-Bates Communicative Development Inventories, which are the first concepts used to teach a language to a child. Ninety-five percent of deaf infants are born to hearing parents, who often do not know ASL.

You can download the game on the Play Store and iOS app store to try it out yourself or head over to popsign.org to learn more.


Looking forward

It takes people with curiosity, creativity and compassion to harness AI’s potential. We’re excited to share more Lab Sessions and see where we can take AI together.


Do-it-yourself artificial intelligence

With our maker kits, build intelligent systems that see, speak, and understand. Then start tinkering. Take things apart, make things better. See what problems you can solve.


Do-it-yourself intelligent camera. Experiment with image recognition using neural networks.

Build an intelligent camera that can see faces, detect emotions, and recognize common objects. Create your own projects that take action based on what the Vision Kit sees.


Do-it-yourself intelligent speaker. Experiment with voice recognition and the Google Assistant.

With the Google Assistant built-in, build an intelligent speaker that can understand you, and respond when you ask it a question or tell it to do something. Create your own projects that use voice recognition to control robots, music, games, and more.

Take a tour through the AIY Vision Kit with James, AIY Projects engineer, as he shows off some cool applications of the kit like the Joy Detector and object classifier.

Watch as James, AIY Projects engineer, talks about extending the AIY Voice Kit while building a voice-controlled model train.



Want to Experiment with AI? Here Are 15 Tools To Get Started at Home

Need help drafting an email, mowing the lawn, or washing your windows? There's an invention for that.



AI is here to help. Or so it says. / Generated by Benjamen Purvis using AI

IF YOU NEED…

To create something, try these ➔.

ChatGPT

Need some help writing an email or birthday card? Every week, more than 100 million people already rely on OpenAI’s generative chatbot to craft missives, shopping lists, essays, and more. Capable of processing natural human language and inferring intention, ChatGPT comes in both free and paid versions and works best when treated as a conversation partner. To get started on good footing, try giving the chatbot a prompt that includes details of what you want the outcome to look like. (For example: you might ask for a stew recipe and cite the ingredients that you already have on hand.) “Once you get an initial answer, then you start asking it to refine its answer by giving it more information and guiding it toward what you’re looking for,” explains Michael Oh, founder and president of Boston-based TSP Smart Spaces, which integrates homes with smart technology.
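Oh’s “refine by conversation” advice maps to a simple pattern: keep the whole exchange and resend it each turn, so the model can build on earlier context. The sketch below uses a stand-in fake_model function, not a real API call; only the conversation-history pattern is the point.

```python
# Conversation-history pattern: every turn sends the full transcript,
# so later answers can refine earlier ones. fake_model is a placeholder.

def fake_model(messages):
    return f"(reply drawing on {len(messages)} prior messages)"

def chat_turn(history, user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "Suggest a stew recipe.")
chat_turn(history, "I only have lentils, carrots, and onions.")
chat_turn(history, "Make it a 30-minute version.")
```

Each follow-up narrows the answer without restating everything, which is why treating the chatbot as a conversation partner beats one-shot prompts.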

  • Deepak Chopra Thinks AI Can Help Save Us
  • Ten Important Humans Behind Boston’s AI Revolution

Google Gemini

Another versatile workhorse of a chatbot, Google Gemini can scan the Internet for the most recent information to craft its answers to your prompts or questions. (ChatGPT is still catching up on pulling more recent data from the web than its circa-2021 training.) Not only does this yield meatier responses than the ones you’ll get from the free version of ChatGPT, but Gemini also offers more user options for modifying and reformatting responses. The catch is that Gemini’s writing skills are lagging behind ChatGPT, so consider using this platform for research first, and be prepared to personally polish up any Gemini-generated content that you plan on presenting to an audience.


DALL·E 2

Composing text is just the beginning when it comes to the generative powers of AI. With OpenAI’s visual-centric DALL·E 2 platform, you can create customized images using word prompts. The process is similar to ChatGPT and Google Gemini—once your first image is created, you can tweak it by modifying the prompt. Let’s say you want an image of a blue whale crooning into a microphone. (Just go with it.) You can change the prompt to soften the color of the whale, enlarge the whale, or turn a modern-day microphone into a World War II–era microphone. It’s all your call.

Socratic by Google

While generative AI poses troubling questions for families and educators—will kids use AI to do the heavy lifting on their homework and projects in the future?—Google is beginning to address these concerns with Socratic , a learning-centric chatbot with guardrails. Socratic won’t write essays or stories, but it will answer any questions that a student might have about what they’re studying, with conversational language and helpful graphics.

Entertainment and Well-Being

Virtual Reality Goggles

Stuck at home and bored on a Tuesday evening? Feel like sprucing up your white-walled living room with vines, hummingbirds, and some ethereal cascades? Apple Vision Pro goggles ($3,499) allow you to superimpose digital images on physical settings, and with the integration of Adobe’s Firefly AI, you can create the digital images with your own prompts and then refine them.

Smart Glasses

If you’ve ever felt the urge to capture a face-melting concert or an epic sunset—without pulling out your phone—you might want to pony up for Ray-Ban’s “Meta Wayfarer” glasses (starting at $316). Pairing classic frames with Meta AI, these smart glasses can be used to record videos and stream music on the go. They also include a multimodal search function to help with spontaneous tasks like identifying birds or plants, finding nearby restaurants, or crafting social media captions for your photos.

Multifunction Mirrors

What if your bedroom mirror could be a portal to personal training sessions and more than 700 fitness classes, all while tracking your progress to help you reach your wellness goals? The Forme Studio smart mirror ($2,495, plus a monthly subscription) packs all of these features into one elegant package.

AI-Powered Mattresses

Sleeping poorly and waking up sore every morning? It might be time for a new mattress. ErgoSportive’s adjustable smart bed (starting at $3,599) uses personalized sleep data to physically adjust its shape and better accommodate your sleeping position(s).

IF YOU NEED…

Extra Help at Home

Roomba Cleaning Robot

What It Is: Among the earliest AI consumer products, iRobot ’s famous roaming vacuums and mops have only gotten better at removing dirt and debris from floors and carpets. The iconic cleaning machines can now cover larger rooms and dodge more obstacles, and they’re even capable of emptying their detritus and replenishing their water reserves.

Why You Should Use It: Because programming your floors to be cleaned via iRobot’s Home app is a million times easier than vacuuming every night.

Cost: Starting at $275.

Hobot Window-Cleaning Robot

What It Is: HOBOT’s window-cleaning (and -climbing!) robots keep your glass clean with a patented design that includes an ultrasonic water spray nozzle and built-in microfiber cloth. Simply fill it with water or detergent, plug it in with the included extra-long power cord, and watch it go to town.

Why You Should Use It: Once your automated window-cleaning buddy gets going, you can leave it alone, devote your time to more interesting things, and get notified when the job is done with the Hobot app.

Cost: Starting at $479.

Husqvarna Automower

What It Is: Imagine a battery-powered robot rolling around your lawn, quietly trimming the grass, and you’ll have a pretty clear picture of the Automower.

Why You Should Use It: Mowing your lawn by hand can be physically cumbersome. Plus, the Automower can perform this task without the noise (or the emissions) of manual mowing.

Cost: Starting at $899.

AI-Powered Robotic Arms

What They Are: Today, robotic arms, which can be mounted to wheelchairs to help people with disabilities complete daily tasks, are still manually controlled. AI will change this by automating every movement.

Why You Should Use Them: Current robotic arms tend to be slower and difficult to control when performing tasks such as opening doors. Not only will the intuitive power of AI address these shortcomings, but the absence of manual controls could make them accessible to a wider range of people.

Cost: TBD, but for context, contemporary AI-free robotic arms can cost upward of $20,000.


Financial Help

Keeping money in order is still vexing for many of us, but AI-powered finance-management apps are helping to demystify the process. Cleo , a financial-planning chatbot, can assess your bank and credit card accounts and help you determine how much you’ll need to squirrel away for a splurge like a new car or a getaway. Saving for retirement? Consider consulting Magnifi , which suggests funds and investment products that can diversify your portfolio. And if you’re trying to stop getting dinged by paid subscriptions that you forgot to cancel, Rocket Money can keep track of all your recurring payments in one place and nix the subscriptions that you don’t need anymore.


Genetic Engineering & Biotechnology News

CRISPR CREME: An AI Treat to Enable Virtual Genomic Experiments

Credit: NIH National Human Genome Research Institute

Consider the task of sifting through millions upon millions of genetic mutations. With CRISPR gene-editing technology, a select few of these mutations might have therapeutic potential, but discovering and then validating them would involve a considerable amount of lab work and cost. But what if it were possible to achieve this virtually, using artificial intelligence (AI)?

Researchers at the Cold Spring Harbor Laboratory (CSHL), headed by assistant professor Peter Koo, PhD, have developed an AI-powered virtual laboratory, cis-regulatory element model explanations (CREME), that allows geneticists to run thousands of virtual experiments with the click of a button. Using CREME, scientists can begin identifying and understanding key regions of the genome.

Koo and colleagues reported on the development of CREME in Nature Genetics, in a paper titled “Interpreting cis-regulatory interactions from large-scale deep neural networks,” in which they stated, “CREME can provide interpretations across multiple scales of genomic organization, from cis-regulatory elements to fine-mapped functional sequence elements within them, offering high-resolution insights into the regulatory architecture of the genome.”

“The rise of large-scale, sequence-based deep neural networks (DNNs) for predicting gene expression has introduced challenges in their evaluation and interpretation,” the authors wrote. Current evaluations align DNN predictions with experimental data, and while these approaches provide insights into generalization, they may offer only limited insights into their decision-making process, the team continued. “… the extensive sequence size of large-scale DNNs presents a challenge when evaluating their predictions and interpreting learned patterns.”

Current methods for evaluating large-scale models have relied on assessing the alignment between predictions and existing experimental perturbation assays, such as CRISPR interference (CRISPRi) technology, the authors further noted. CREME draws inspiration from CRISPRi, a genetic perturbation technique based on CRISPR, which allows biologists to turn down the activity of specific genes in a cell. CREME is almost akin to an AI version of CRISPRi, and lets scientists make similar changes in the virtual genome and predict their effects on gene activity. “Here we present cis-regulatory element model explanations (CREME), an in silico perturbation toolkit that interprets the rules of gene regulation learned by a genomic DNN,” the team commented. “CREME provides a suite of in silico experiments for unbiased interpretations of large-scale sequence-based DNNs, enabling CRE-level analysis similar to CRISPRi perturbations.”
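To give a flavor of what an in silico perturbation experiment looks like, here is a simplified sketch (not CREME's actual API; the toy "model" and tile coordinates are our own stand-ins) that perturbs one genomic tile at a time and measures the change in predicted expression:

```python
import random

def predict_expression(seq):
    # Stand-in for a trained genomic DNN such as Enformer.
    # Toy rule: expression depends on GC content near the gene (positions 40-60).
    return sum(1 for base in seq[40:60] if base in "GC") / 20

def perturb_tile(seq, start, end, rng):
    # Replace one tile with random bases, destroying any regulatory motifs there.
    tile = "".join(rng.choice("ACGT") for _ in range(end - start))
    return seq[:start] + tile + seq[end:]

def tile_effects(seq, tile_size, rng):
    # Score every tile by how much perturbing it shifts the model's prediction.
    baseline = predict_expression(seq)
    effects = {}
    for start in range(0, len(seq) - tile_size + 1, tile_size):
        mutant = perturb_tile(seq, start, start + tile_size, rng)
        effects[start] = predict_expression(mutant) - baseline
    return effects

rng = random.Random(0)
seq = "AT" * 20 + "GCGCGCGCGCGCGCGCGCGC" + "AT" * 20  # GC-rich tile at 40-60
effects = tile_effects(seq, 20, rng)
print(effects)
```

Tiles whose perturbation produces the largest drop in predicted expression are candidate cis-regulatory elements, analogous to the CRISPRi-style readout described above.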

Koo added, “In reality, CRISPRi is incredibly challenging to perform in the laboratory. And you’re limited by the number of perturbations and the scale. But since we’re doing all our perturbations [virtually], we can push the boundaries. And the scale of experiments that we performed is unprecedented—hundreds of thousands of perturbation experiments.”

Koo and his team tested CREME on another AI-powered DNN genome analysis tool called Enformer. They wanted to know how Enformer’s algorithm makes predictions about the genome. Koo says questions like that are central to his work.

“We have these big, powerful models,” Koo said. “They’re quite compelling at taking DNA sequences and predicting gene expression. But we don’t really have any good ways of trying to understand what these models are learning. Presumably, they’re making accurate predictions because they’ve learned a lot of the rules about gene regulation, but we don’t actually know what their predictions are based off of.”

With CREME, Koo’s team uncovered a series of genetic rules that Enformer learned while analyzing the genome. That insight may one day prove invaluable for drug discovery. The investigators stated, “CREME provides a powerful toolkit for translating the predictions of genomic DNNs into mechanistic insights of gene regulation … Applying CREME to Enformer, a state-of-the-art DNN, we identify cis-regulatory elements that enhance or silence gene expression and characterize their complex interactions.” Koo added, “Understanding the rules of gene regulation gives you more options for tuning gene expression levels in precise and predictable ways.”

With further fine-tuning, CREME may soon set geneticists on the path to discovering new therapeutic targets. Perhaps most impactfully, it may even give scientists who do not have access to a real laboratory the power to make these breakthroughs. “CREME provides a road map to improving perturbation experiments to better characterize cis-regulatory mechanisms,” the team concluded, noting that any insights gained via DNN interpretation should be “treated as hypotheses and validated by laboratory experiments.”


AI for Kids

Discover the Magic of AI, where learning meets fun.


Expanding Minds, Igniting Imagination with AI Technologies

Our Mission

Our group of AI enthusiasts has created GobbleAI’s website because we think everyone deserves to understand what AI is, how it works, and what it brings to our world. GobbleAI, focusing on AI for Kids, is a free teaching resource that aims to educate children, parents, and teachers about Artificial Intelligence and related fields. Discover in-depth insights into AI, including its workings, experiments, and fascinating aspects.

AI Wonderland

Explore a treasure trove of educational videos, articles, and tutorials tailored for beginners, covering exciting AI concepts. Click to discover and learn!

Play and Learn

Get ready to level up your entertainment experience with our curated selection of interesting online AI games, experiments and fun stuff.


Top Features

✓ Applications ✓ Experiments ✓ Games

AI Experiments

Welcome to our AI Experiments page, showcasing a curated collection of popular and ongoing experiments by tech giants like Google, Microsoft, and more. Witness the groundbreaking work shaping the future of artificial intelligence as you explore interactive projects, creative applications, and cutting-edge experiments. Join us on this journey of discovery and be inspired by the limitless possibilities of AI experimentation.

Boost Your Coding Skills

Learn the essential programming languages. Our Coding Club page is a haven for coding enthusiasts and aspiring developers. From the basics to advanced techniques, embark on a journey to master the programming languages that fuel the incredible innovations of artificial intelligence. There is something for everyone: we have resources covering basic to advanced programming concepts in multiple popular languages, such as Python, C++, HTML, and Scratch.


✓ Python ✓ Scratch ✓ C++

All you need in one place.

Our expert team puts together a plethora of resources, worksheets and assignments so that you can practice all that you learn and hone your skills.

✓ Hands-on practice

✓ Interactive exercises

✓ Reinforce your understanding

Coding Projects

✓ Solidify your coding

✓ Interesting exercises

✓ Designed to enhance your skills

✓ Educational videos

✓ Repository of articles

✓ Carefully selected resources

How does AI work?

AI uses algorithms and data to train computers to recognize patterns and make predictions. It involves machines learning from examples and improving their performance over time.
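To see this pattern-learning idea in code, here is a tiny Python sketch of our own (not from any real AI product; the animal measurements are made up) in which a program learns from labeled examples and then predicts the label of a new point by copying its nearest neighbor:

```python
# A tiny "learn from examples" demo: 1-nearest-neighbor classification.
# Each example is ((height_cm, weight_kg), label).
examples = [
    ((30, 4), "cat"), ((25, 3), "cat"),
    ((60, 25), "dog"), ((70, 30), "dog"),
]

def predict(point):
    # Find the stored example closest to the new point and copy its label.
    def distance(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    _, label = min(examples, key=distance)
    return label

print(predict((28, 5)))   # a small, light animal
print(predict((65, 28)))  # a big, heavy animal
```

The more examples the program sees, the better its guesses get, which is the core idea behind machine learning.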

What are the important skills required?

Learning some computer skills, especially Python programming, helps when working with AI. You will also want to understand data (information) and be good at solving problems and thinking carefully.

Is it necessary to know coding to understand AI?

While you do not have to, knowing a bit about coding, especially in Python, can make learning AI easier. Some tools let you use AI without doing much coding, but it is helpful to know the basics.

Will AI take away the jobs?

AI might change how some jobs work, but it also creates new types of jobs. People can still have important roles, especially if they learn skills that AI cannot easily do, like creativity and understanding emotions.

What is a good age to start learning AI?

Kids can start playing with simple AI concepts in elementary school. As they get older, maybe in middle or high school, they can learn more about AI in a more structured way.

How can I get started?

Start by learning some basic coding, especially Python. You can find online courses that teach AI in a fun way. Try doing small projects and join communities to meet others interested in AI. It is okay to start simple and learn more as you get more comfortable.


Need further assistance?

Need help finding the answers you need? Let’s connect.

AI Playground Quick Start Guide

Explore and experiment with AI in a Stanford-hosted environment

Visit the AI Playground

  • Open the playground
  • Try out prompts
  • Adjust options
  • Explore right-side panel

About the AI Playground

The Stanford AI Playground is a user-friendly platform, built on open-source technologies, that allows you to safely try various AI models from vendors like OpenAI, Google, and Anthropic in one spot. The AI Playground is being managed by University IT (UIT) as a pilot for Stanford faculty and staff. Based on feedback from these communities, we hope to extend the AI Playground to students in coming months.

AI Playground: An Introduction

With this short video, discover how exploring AI tools and technologies can benefit you. Plus, find out how the AI Playground works overall with short demos of how to use prompts and understand replies.

Playground safety


Do not use high-risk data in your attachments or prompts.

And remember, while LLMs are advanced tools, they are not flawless and may create errors or hallucinations. Exercise caution before trusting or using results verbatim.

Responsible AI guidance

1. Open the playground

Dive in and get started with a visit to the AI Playground!

Step-by-step:

  • Visit aiplayground.stanford.edu
  • Follow the steps to log in with Single Sign On (SSO).*

* You might be taken to an Information Release settings page, especially if it's your first time visiting.

  • Select your consent duration preference.
  • Click Accept to keep going. (All data shown is kept within Stanford systems.)

Information Release settings page

2. Try out prompts

Now, try out your prompts and watch the magic happen! 

  • To use the default settings with your prompts, navigate into the empty field at the bottom of the welcome screen.
  • Type a prompt into the field and press the return or Enter key.
  • See guidance .

Interacting with a message in the conversation:

Beneath your prompts and the responses generated by the LLMs, you'll find several more advanced features: 

  • Read aloud - Reads out the message via a computer-generated voice.
  • Save & Submit - Saves your edit and resubmits the information to regenerate the AI's response.
  • Save - Saves your edit without regenerating the response.
  • Cancel - Closes the edit window without saving changes.
  • Copy to clipboard - Copies the content of the selected message to your clipboard to be pasted into another window or program.
  • Regenerate - Forces the model to try to create a new response without any additional context.
  • Fork - Creates a new conversation that starts from the selected message. This can be useful for refocusing the conversation, creating separate branching scenarios, preserving context, and more.

Stanford AI Playground screenshot

3. Adjust AI models and configuration options

You can select your preferred models at the top of the page. You can also adjust and switch models in the middle of a conversation.

  • For example, you can start a conversation in OpenAI with the prompt "Write an article about topic A.", then switch to the DALL-E-3 plugin to request an image to go with the article, then switch to Anthropic and request a list of headline options.

OpenAI ChatGPT models

Choose between these available versions for OpenAI:

  • gpt-4o - Best for complex reasoning, image and PDF analysis, as well as advanced coding; strengths include deep comprehension.
  • gpt-3.5-turbo - Best for content creation, basic coding queries; strengths include speed and general use.  

Google model

The only available model for Google right now is:

  • gemini-pro - Best for understanding idioms and nuanced text from other languages; strengths include high context limits, reasoning, and translation.  

Plugins options

Plugin options include:

  • DALL-E 3 - Turns your natural text prompts into AI-generated images.
  • Google - An AI-assisted Google web search. You can use it paired with the GPT models, and even in conjunction with other plugins like the DALL-E image generation plugin.
  • Web Scraper - Reads the content of live webpages provided in your prompt to help answer questions about the content within the single page specified. This plugin will not crawl entire websites.
  • Wolfram - Provides computational intelligence for solving complex mathematical equations.

Anthropic models

Choose between these available versions for Anthropic:

  • claude-3-5-sonnet - Best for analyzing or creating large bodies of text and for code analysis; strengths include high context limits.
  • claude-3-haiku - Best for quick instruction-based tasks with existing data; strengths include speed and high context limits.

Meta model

The only available model for Meta right now is:

  • llama-3.1 - Best for adapting to different styles and tones and analyzing multilingual texts; strengths include responding to various types of input.

LLM configuration options

You can also use the configuration options button to the right of the selected model to customize your settings.

  • Max context tokens - Defines maximum input size. (1000 tokens is about 750 words.)
  • Max output tokens - Defines maximum output size. (1000 tokens is about 750 words.)
  • Temperature - Controls the “creativity” or randomness of the content being generated.
  • Top P - An alternative to temperature; refines the number of possible responses based on the context provided.
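To make these knobs concrete, here is a small self-contained Python sketch (our own illustration of the general technique, not the Playground's internals) showing how temperature and top-p reshape a toy next-token distribution before sampling:

```python
import math

def apply_temperature(logits, temperature):
    # Softmax with temperature: lower values sharpen the distribution,
    # higher values flatten it toward uniform randomness.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p):
    # Keep the smallest set of tokens whose cumulative probability reaches
    # top_p, then renormalize over that set (nucleus sampling).
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

logits = [2.0, 1.0, 0.1]                 # toy scores for three candidate tokens
cool = apply_temperature(logits, 0.5)    # sharper: heavily favors token 0
warm = apply_temperature(logits, 2.0)    # flatter: closer to uniform
nucleus = top_p_filter(apply_temperature(logits, 1.0), 0.9)
print(cool, warm, nucleus)
```

Lower temperature concentrates probability on the top-scoring token, while top-p drops the unlikely tail entirely and renormalizes what remains.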

All LLM choices

OpenAI additional options

Google Gemini options

Plugin options

Anthropic options

Meta options

4. Explore the right-side panel

Review and manage prompts and files in the side panel.

  • Open or close the side panel using the Open sidebar or Close sidebar button in the middle of the right-hand side of the screen.
  • Prompts: Allows you to save prompts to reuse over and over.
  • Attach Files: Allows you to manage files shared with and generated by the models of the AI Playground.
  • You can adjust which details you are viewing for prompts and files using the model selector at the top of the side panel.

Right-side panel: Model selector

5. Learn to share

You can share conversations you have had with the LLMs. Your name and any messages you add to the conversation after creating the link stay private.

Share a link to a conversation:

  • In the left-side panel, to the right of the conversation's title, click the three dots for the More menu.
  • Click Share . 
  • Note: Be careful when using this feature. The conversation will become accessible to anyone with the link. 


Remove a link to a shared conversation:

  • Click on your own name in the bottom left corner.
  • Click Settings.
  • In the new pop-up window, click Data controls.
  • Next to Shared links, click the Manage button.
  • Note: Deleting a shared link is a permanent action and cannot be undone. Resharing the conversation would include any new information input or generated in the conversation since the original link was created.


6. Customize your experience

Explore settings to customize options that impact your entire AI Playground experience.

To access the settings menu:

  • Click Settings . 


General settings:

  • Theme - Allows you to change between Light and Dark mode.
  • Auto-Scroll to latest message on chat open - When enabled, this will automatically move your view to the last message in the conversation.
  • Hide right-most side panel - When enabled, this will remove the pop-up side panel menu.
  • Archived chats - Allows you to unarchive conversations or delete them from the system entirely.

Messages settings:

  • Press Enter to send message - When enabled, pressing the Enter key will send your message.
  • Save drafts locally - When enabled, text and attachments you enter in the chat will be saved locally as drafts. Drafts are deleted once the message is sent.
  • Default fork option - Defines what information is visible when forking conversations.
  • Use the default fork option - When enabled, the default fork option defined above will be used for every conversation fork.

Data controls:

  • Import conversations from a JSON file - Allows you to import conversations exported from other GPT chat applications.
  • Shared links - Allows you to view and delete all shared conversations under your account.
  • Clear all chats - Deletes all conversations from the left-most side panel. (Does not delete archived conversations.)

Account settings:

  • Profile picture - Allows you to upload a profile picture for yourself, which is shown in your conversations with the AI models. (Image must be under 2MB.)
  • Display username in messages - When enabled, your name is shown next to your prompts in your conversations. When disabled, prompts you send will be labeled as "You" in conversations.

Do you have questions, suggestions, or thoughts to share about the AI Playground? Reach out and let us know what's on your mind.


Microsoft Research Blog

Eureka: Evaluating and Understanding Progress in AI

Published September 17, 2024

By Vidhisha Balachandran, Senior Researcher; Jingya Chen, UX Designer; Neel Joshi, Senior Principal Research Manager; Besmira Nushi, Principal Research Manager; Hamid Palangi, Staff Research Scientist; Eduardo Salinas, Senior Research Software Engineer; Vibhav Vineet, Principal Researcher; James Woffinden-Luey, Senior Research Software Engineer; Safoora Yousefi, Senior Research Software Engineer



In the fast-paced progress of AI, the question of how to evaluate and understand capabilities of state-of-the-art models is timelier than ever. New and capable models are being released frequently, and each release promises the next big leap in frontiers of intelligence. Yet, as researchers and developers, often we ask ourselves: Are these models all comparable, if not the same, in terms of capabilities? There are, of course, strong reasons to believe they are, given that many score similarly in standard benchmarks. In addition, rankings in the numerous leaderboards do not offer a consistent and detailed explanation of why a model is ranked slightly better than others. However, if some models are fundamentally different, what are their strengths and weaknesses? More importantly, are there capabilities that are essential for making AI useful in the real world but still universally challenging for most models? Answering such questions helps us understand where we are on the frontier of AI, and what capability improvements are needed to meet the expectations that humanity and science have for safe and responsible deployments of AI models. 

The prevalence of these models is dependent on our ability to mature the science of in-depth AI evaluation and measurement. In our latest open-source release and technical report EUREKA: Evaluating and Understanding Large Foundation Models, we start answering these questions by running an in-depth measurement analysis across 12 state-of-the-art proprietary and open-weights models. Behind this analysis stands Eureka, an open-source framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. The framework currently supports both language and multimodal (text and image) data and enables developers to define custom pipelines for data processing, inference, and evaluation, with the possibility to inherit from existing pipelines and minimize development work. Eureka and all our evaluation pipelines are available as open source to foster transparent and reproducible evaluation practices. We hope to collaborate with the open-source community to share and expand current measurements for new capabilities and models. 

Focus on challenging and non-saturated capabilities

Eureka tests models across a rich collection of fundamental language and multimodal capabilities that are challenging for even the most advanced models, but are often overlooked by standard benchmarks commonly reported in model releases. In practice, this also means that our analysis intentionally does not pivot on oversaturated benchmarks. As unconventional as this may sound, it is motivated by two reasons. First, measurement on saturated benchmarks, for which most models perform over 95%, leaves very little space for failure analysis and model comparison. Second, even though saturation may be rooted in genuine model improvements, concerns about memorization and overfitting to labeling errors lower the credibility of measurements, especially in the very high accuracy regime. 


Beyond single-score measurements and universal rankings

Even though rankings and leaderboards remain the quickest way to compare models, they rarely uncover important conditions of failure. Due to overreliance on single-score aggregations of performance, the more nuanced comparative findings are hidden behind small differences between model scores aggregated across many capabilities and experimental conditions.

As we show in our study, the chase after these rankings has created surprising dynamics that do not necessarily lead to identical models, but to models that use different complementary skills to achieve comparable overall scores in important leaderboards. Imagine you are a triathlon athlete aiming to achieve an elite performance, which historically takes around two hours. Despite your ambition to hit this top-tier mark, you face constraints with limited time and resources for training and preparation. In practice, athletes often focus their best resources on excelling in certain disciplines while aiming for a satisfactory performance in others. They prioritize based on what they believe is most achievable given their time and experience.

We observe similar phenomena in the set of 12 models we study. Even if two models may score very closely for the same capability, disaggregating that performance across disciplines and input conditions shows that each model has its own complementary strengths. Identifying, measuring, and understanding these strengths for a single model is needed for planning targeted improvements. Repeating this process for a large set of models, as we do in Eureka, is needed for identifying the hypothetical frontier, guiding research and development, and creating a model that combines and delivers capabilities that build on the strengths observed in existing models. 

Measuring consistency: non-determinism and backward compatibility

When people work with collaborators or when they choose tools to assist them in everyday tasks, predictability and consistency are key to a successful collaboration. Similarly, humans and application developers expect their AI assistants and models to be consistent over time for similar inputs and interactions. In our analysis, we study this under-explored angle of model performance, by focusing on two key aspects: the determinism of answer outcomes for identical examples and prompts, and the backward compatibility of model answers at the example level after a model has been updated with a new version. Lack of consistency in either of these domains would lead to breaking trust with users and application developers. 

The analysis shows surprising results and opens new considerations for improvement. For example, we observe that very few large foundation models are fully deterministic and for most of them there are visible variations in the output — and most importantly in accuracy — when asked the same question several times, with generation temperature set to zero—a control that tells models to minimize randomness in generations. In addition, when comparing new model releases with earlier models from the same family, a significant amount of regress at the example level can be observed after the update, even though the overall accuracy may increase. In practice, this type of inconsistency can be frustrating for application developers who rely on prewritten examples and prompts propagated to a foundation model. 
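As a sketch of how example-level backward compatibility can be measured (our own illustration, not Eureka's actual implementation), compare per-example correctness between two model versions and count regressions alongside the aggregate accuracies:

```python
def compatibility_report(old_correct, new_correct):
    # old_correct / new_correct: per-example correctness booleans for the
    # same test set, scored under the old and updated model versions.
    assert len(old_correct) == len(new_correct)
    n = len(old_correct)
    regressed = sum(o and not c for o, c in zip(old_correct, new_correct))
    fixed = sum((not o) and c for o, c in zip(old_correct, new_correct))
    return {
        "old_accuracy": sum(old_correct) / n,
        "new_accuracy": sum(new_correct) / n,
        "regression_rate": regressed / n,  # examples the update broke
        "fix_rate": fixed / n,             # examples the update solved
    }

# Toy data: the update raises overall accuracy yet breaks two
# previously-solved examples.
old = [True, True, True, False, False, False, True, False]
new = [True, False, True, True, True, True, False, True]
report = compatibility_report(old, new)
print(report)
```

Here overall accuracy rises from 0.5 to 0.75, yet a quarter of the examples regress, which is the kind of example-level inconsistency that a single aggregate score hides.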

Eureka Insights

Figure 1 is a high-level illustration of the current state of AI for Eureka-Bench, highlighting the best and the worst performances across various capabilities. These results reveal a nuanced picture of different models’ strengths, showing that no single model excels in all tasks. However, Claude 3.5 Sonnet, GPT-4o 2024-05-13, and Llama 3.1 405B consistently outperform others in several key areas.

A summary of insights extracted by using the Eureka framework, shown via two radar charts for multimodal (left) and language (right) capabilities respectively. The radar charts show the best and worst performance observed for each capability.

Multimodal capabilities

Evaluation in Eureka reveals that state-of-the-art models are still fairly limited in their multimodal abilities, specifically when it comes to detailed image understanding (for example, localization of objects, geometric and spatial reasoning, and navigation), which is most needed in truly multimodal scenarios that require physical awareness, visual grounding, and localization. 

  • State-of-the-art multimodal models struggle with geometric reasoning.  Models perform worse in reasoning about height than about depth. Claude 3.5 Sonnet and Gemini 1.5 Pro are the best performing models for this task, with Claude 3.5 Sonnet being the most accurate model for depth ordering, and Gemini 1.5 Pro the most accurate for height ordering. 
  • Multimodal capabilities lag language capabilities.  On tasks that can be described either as multimodal or as language-only, the performance of most tested models is higher for the language-only condition. GPT-4o 2024-05-13 is the only model that consistently achieves better results when presented with both vision and language information, showing therefore that it can better fuse the two data modalities.
  • Complementary performance across models for fundamental multimodal skills . Claude 3.5 Sonnet, GPT-4o 2024-05-13, and GPT-4 Turbo 2024-04-09 have comparable performance in multimodal question answering (MMMU). In tasks like object recognition and visual prompting, the performance of Claude 3.5 Sonnet is better or comparable to GPT-4o 2024-05-13, but Gemini 1.5 Pro outperforms them both. Finally, in tasks like object detection and spatial reasoning, GPT-4o 2024-05-13 is the most accurate model. 

Language capabilities

The evaluation through Eureka shows that there have been important advances from state-of-the-art models in the language capabilities of instruction following, long context question answering, information retrieval, and safety. The analysis also uncovers major differences and gaps between models related to robustness to context length, factuality and grounding for information retrieval, and refusal behavior. 

  • Faster improvements in instruction following across all model families.  Instruction following is the ability to follow guidance expressed in user prompts regarding specifications related to the format, style, and structure of the generated content. Among the studied language capabilities, instruction following is where most models are improving fastest, potentially due to strong investments in instruction tuning, with most models now achieving an instruction following rate above 75%. 
  • All models’ performance in question answering drops with longer context.   Contrary to “needle-in-a-haystack” experiments, testing state-of-the-art models on tasks that involve reasoning over long context shows significant decline in performance as context size grows. Amongst all models, GPT-4o 2024-05-13 and Llama 3.1 405B have the lowest drop in performance for longer context.
  • Major gaps in factuality and grounding for information retrieval from parametric knowledge or input context.  Models exhibit query fact precision rates of lower than 55%, fact recall rates of lower than 25%, and rates of irrelevant and fabricated information above 20%. Llama 3.1 405B, GPT-4o 2024-05-13, and Claude 3.5 Sonnet are the top performers in this area across different conditions.
  • High refusal rates. Lower accuracy in detecting toxic content vs. neutral content for most models.  While several models have high accuracy rates for toxicity detection, others (Gemini 1.5 Pro, Claude 3.5 Sonnet, Claude 3 Opus, and Llama 3.1 405B) exhibit low accuracy in classifying toxic content and a high refusal rate to classify toxic or neutral content, both of which make toxic content difficult to detect. During the safe language generation evaluation, models like GPT-4 1106 Preview and Mistral Large 2407 have the highest toxicity rates. GPT-4o 2024-05-13 is the only model that has both a high toxicity detection accuracy and a low toxicity score for safe language generation. 

Non-determinism

Several models have highly non-deterministic output for identical runs. Gemini 1.5 Pro, GPT-4 1106 Preview, GPT-4 Vision Preview, and GPT-4 Turbo 2024-04-09 show high non-determinism of outcomes. These results raise important questions regarding the stability of user and developer experiences when repeatedly inferencing with identical queries using the same prompt templates. Llama 3 70B, Llama 3.1 70B, and Mistral Large 2407 are almost perfectly deterministic. 
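One simple way to quantify this kind of output instability is to send the same prompt repeatedly and measure how often the modal completion recurs. The sketch below is a generic illustration, not the Eureka ML Insights API; `query_model` is a hypothetical callable standing in for an inference endpoint.

```python
from collections import Counter

def determinism_rate(query_model, prompt, n_runs=10):
    """Fraction of repeated identical queries that return the modal output.

    1.0 means perfectly deterministic; a value near 1/n_runs means every
    run produced a different completion.
    """
    outputs = [query_model(prompt) for _ in range(n_runs)]
    modal_count = Counter(outputs).most_common(1)[0][1]
    return modal_count / n_runs

# With a fully deterministic stand-in "model":
rate = determinism_rate(lambda p: p.upper(), "hello", n_runs=5)
# rate == 1.0
```

In practice, sampling temperature would be fixed (or set to zero) before measuring, so the metric isolates non-determinism in the serving stack rather than intentional sampling randomness.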

Backward compatibility

Backward incompatibility for shifts within the same model family is prevalent across all state-of-the-art models. This is reflected in high regression rates for individual examples and at a subcategory level. This type of regression can break trust with users and application developers during model updates. Regression varies per task and metric, but we observe several cases where it exceeds 10% across three model families (Claude, GPT, Llama), and regressions can sometimes dominate progress rates for whole subcategories of data. 
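Per-example regression of this kind can be computed by comparing correctness labels for the same examples across two model versions. The sketch below is a generic illustration, not the Eureka framework's own implementation:

```python
def regression_rate(before, after):
    """Per-example backward incompatibility: the fraction of examples the
    previous model version answered correctly that the updated version
    now gets wrong.

    `before` and `after` are parallel lists of booleans marking whether
    each example was answered correctly by the old and new versions.
    """
    regressed = sum(1 for old, new in zip(before, after) if old and not new)
    previously_correct = sum(before)
    return regressed / previously_correct if previously_correct else 0.0

# 3 examples were correct before the update; 1 of them broke afterwards:
regression_rate([True, True, False, True], [True, False, False, True])
# → 1/3
```

Note that aggregate accuracy can rise even while this rate is high, which is why per-example regression is reported separately from progress.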

The complementary results extracted from this study highlight opportunities for improving current models across various areas, aiming to match the performance of the best model for each individual capability in this challenge set. However, several tasks in the challenge set remain difficult even for the most capable models. It is crucial to discuss and explore whether these gaps can be addressed with current technologies, architectures, and data synthesis protocols.

Finally, Eureka and the set of associated benchmarks are only the initial snapshot of an effort that aims at reliably measuring progress in AI. Our team is excited about further collaborations with the open-source and research communities, with the goal of sharing and extending current measurements for new capabilities and models. 

Meet the authors

  • Vidhisha Balachandran, Senior Researcher
  • Jingya Chen, UX Designer
  • Neel Joshi, Senior Principal Research Manager
  • Besmira Nushi, Principal Research Manager
  • Hamid Palangi, Staff Research Scientist
  • Eduardo Salinas, Senior Research Software Engineer
  • Vibhav Vineet, Principal Researcher
  • James Woffinden-Luey
  • Safoora Yousefi


Shots - Health News
AI met fruit fly, and a better brain model emerged

Jon Hamilton

AI and fly brains

Light enters the compound eye of the fly, causing hexagonally arranged photoreceptors to send electrical signals through a complex neural network, enabling the fly to detect motion. Credit: Siwanowicz, I. & Loesche, F./HHMI Janelia Research Campus; Lappalainen, J.K./University of Tübingen

Scientists have created a virtual brain network that can predict the behavior of individual neurons in a living brain.

The model is based on a fruit fly’s visual system, and it offers scientists a way to quickly test ideas on a computer before investing weeks or months in experiments involving actual flies or other lab animals.

“Now we can start with a guess for how the fly brain might work before anyone has to make an experimental measurement,” says Srini Turaga, a group leader at the Janelia Research Campus, a part of the Howard Hughes Medical Institute (HHMI).

The approach, described in the journal Nature, also suggests that power-hungry artificial intelligence systems like ChatGPT might consume much less energy if they used some of the computational strategies found in a living brain.

A fruit fly brain is “small and energy efficient,” says Jakob Macke, a professor at the University of Tübingen and an author of the study. “It’s able to do so many computations. It’s able to fly, it’s able to walk, it’s able to detect predators, it’s able to mate, it’s able to survive, using just 100,000 neurons.”

In contrast, AI systems typically require computers with tens of billions of transistors. Worldwide, these systems consume as much power as a small country.

“When we think about AI right now, the leading charge is to make these systems more power efficient,” says Ben Cowley, a computational neuroscientist at Cold Spring Harbor Laboratory who was not involved in the study.

Borrowing strategies from the fruit fly brain might be one way to make that happen, he says.


A model based on biology

The virtual brain network was made possible by more than a decade of intense research on the composition and structure of the fruit fly brain.

Much of this work was done, or funded, by HHMI, which now has maps that show every neuron and every connection in the insect’s brain.

Turaga, Macke and PhD candidate Janne Lappalainen were part of a team that thought they could use these maps to create a computer model that would behave much like the fruit fly’s visual system. This system accounts for most of the animal’s brain.

The team started with the fly’s connectome, a detailed map of the connections between neurons.

“That tells you how information could flow from A to B,” Macke says. “But it doesn’t tell you which [route] is actually taken by the system.”

Scientists have been able to catch glimpses of the process in the brains of living fruit flies, but they have no way to capture the activity of thousands of neurons responding to signals in real time.

“Brains are so complex that I think the only way we will ever be able to understand them is by building accurate models,” Macke says.

In other words, by simulating a brain, or part of a brain, on a computer.

So the team decided to create a model of the brain circuits that allow a fruit fly to detect motion, like the approach of a fast moving hand or fly swatter.

“Our goal was not to build the world’s best motion detector, but to find the one that does it the way the fly does.”

The team started with virtual versions of 64 types of neurons, all connected the same way they would be in a fly’s visual system. Then the network “watched” video clips depicting various types of motion.

Finally, an artificial intelligence system was asked to study the activity of neurons as the video clips played.

Ultimately, the approach yielded a model that could predict how every neuron in the artificial network would respond to a particular video. Remarkably, the model also predicted the response of neurons in actual fruit flies that had seen the same videos in earlier studies.
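The general recipe here, connectivity fixed by the connectome while connection strengths are fit to observed activity, can be illustrated with a deliberately tiny toy model. The sketch below is not the authors' model; it simply shows masked gradient descent recovering weights on a fixed wiring diagram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy circuit: 6 "neurons". The connectome mask fixes WHICH pairs are
# connected; only the strengths of those connections are learned, and
# absent connections stay exactly zero.
n = 6
mask = (rng.random((n, n)) < 0.4).astype(float)

stimulus = rng.normal(size=(200, n))       # stand-in for visual input frames
true_w = mask * rng.normal(size=(n, n))    # "ground truth" connection strengths
responses = stimulus @ true_w              # stand-in for recorded activity

# Fit the strengths by gradient descent, masking every update with the
# connectome so learning never invents a connection that is not wired.
w = np.zeros((n, n))
lr = 0.05
for _ in range(1000):
    grad = stimulus.T @ (stimulus @ w - responses) / len(stimulus)
    w -= lr * mask * grad

print(np.abs(w - true_w).max())  # near zero: the masked fit recovers the circuit
```

The real model is nonlinear, spans 64 neuron types, and is trained on video, but the constraint is the same: optimization only adjusts parameters of connections that exist in the measured connectome.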

A tool for brain science and AI

Although the paper describing the model has just come out, the model itself has been available for more than a year. Brain scientists have taken note.


“I’m using this model in my own work,” says Cowley, whose lab studies how the brain responds to external stimuli. He says the model has helped him gauge whether ideas are worth testing in an animal.

Future versions of the model are expected to extend beyond the visual system and to include tasks beyond detecting motion.

“We now have a plan for how to build whole-brain models of brains that do interesting computations,” Macke says.

Want to experiment with AI features in Microsoft Teams? Join the Teams Premium Pilot!

Explore the power of AI in Microsoft Teams with a Teams Premium License!

What is Microsoft Teams Premium?

Microsoft Teams Premium is an add-on license that provides access to advanced features and capabilities within Microsoft Teams, including enhanced artificial intelligence integration. Key features of Teams Premium include:

  • AI-Powered Meetings: Automatically generate comprehensive meeting notes and follow-up tasks with the power of AI, capturing the most important information even if you miss the meeting. View live translations of meeting captions in up to 41 different languages.
  • Customized Webinars: Enhance webinars and town hall presentations with branded organizational backgrounds, customizable emails, optimized video streaming, and attendee engagement reports.
  • Confidential Meetings: Leverage advanced meeting protections for confidential meetings, such as watermarks, end-to-end encryption, and restrictions on who can record.
  • Advanced Virtual Appointments: Streamline appointment management by offering on-demand and scheduled appointments with a queue view, customizable waiting room, SMS confirmations and reminders, and detailed reports and analytics to track appointments.

Learn more about Teams Premium at Overview of Microsoft Teams Premium - Microsoft Support.

Purpose of the Pilot

This pilot aims to explore the capabilities and potential of these tools to enhance our work. Participants will have access to Teams Premium for the length of the pilot and will help determine how licenses could be offered in the future.

Who Can Join and What Does the Pilot Entail?

We are seeking faculty and staff who regularly use Microsoft Teams, particularly to host meetings, webinars, or town halls, or who use Microsoft Bookings for appointment scheduling. The pilot will run from October 21 – November 26.

Participants will receive access to a Teams Premium license and will be invited to participate in a virtual training session. Following the pilot, all participants will be asked to complete a brief survey.

How to Join the Teams Premium Pilot

To participate, please fill out this brief form by Monday, September 30, 2024. Selected participants will be notified by email.

If you have any questions, please reach out to the Information Technology Service Desk for assistance or contact the Business Productivity Team for a consultation.

Information Technology Service Desk: ithaca.edu/itchat | [email protected] | 607-274-1000 | 104 Job Hall


News from EUROfusion, Members and Partners

EUROfusion spearheads advances in Artificial Intelligence and Machine Learning to unlock fusion energy

  • September 12, 2024
  • Gieljan de Vries
  • EUROfusion news

Fusion energy promises to deliver safe, sustainable, and low-carbon baseload power, complementing other clean energy sources like solar and wind.

To achieve this, we need to address complex physics and engineering challenges, including understanding the collective movements of charged particles in magnetic fields, mitigating disruption events, analysing material erosion effects, and processing data rapidly enough for use in control loops. Artificial Intelligence and Machine Learning offer new opportunities to deepen our understanding of these phenomena.

Artist's impression of Artificial Intelligence research for fusion. Credit: Pexels / GoogleDeepMind

Building on the world’s largest dataset

“With new research projects on Artificial Intelligence and Machine Learning, EUROfusion aims to accelerate progress towards fusion energy and support the ongoing efforts in its work packages,” explains Sara Moradi of the EUROfusion Programme Management Unit. “Machine learning and Artificial Intelligence are powerful tools for extracting insight from data, uncovering patterns, and suggesting control schemes that are too computationally intensive to identify with traditional computer models.”

EUROfusion’s extensive dataset of fusion experiments spans decades of research, from the earliest fusion machines to the most advanced systems currently in operation. This unparalleled resource positions EUROfusion uniquely to drive forward Artificial Intelligence applications in fusion research.

“Fusion is a great sandbox for Artificial Intelligence and Machine Learning,” agrees José Vicente (University of Lisbon), the principal investigator of one of the fifteen projects. “As a very complex system, it has many open questions. We can already address those with today’s large amounts of experimental data and realistic numerical simulations of the key physics, but not all of them; that is the gap that Artificial Intelligence may help close.”

Artificial Intelligence for Fusion projects

Following a call and selection process, the EUROfusion General Assembly approved the support for 15 new Artificial Intelligence Fusion research projects. The strong response to the call highlights the scientific community’s commitment to using state-of-the-art approaches to advance computational techniques for magnetically confined plasmas.

The 15 projects will receive a total amount of €2.659 million, of which half is provided by collaborative co-funding from the researchers’ home institutes and half from EUROfusion. The research projects will run for a period of two years.

Different methods to estimate the group delay for different radar frequencies in reflectometry data from a fusion plasma. The anticipated machine learning method (ML) outperforms traditional analysis methods (MP, BP). Credit: J.Vicente and J.Santos, IST, University of Lisbon

Quotes from participating researchers

“Our project uses probability theory and machine learning to understand the conditions for so-called ‘hybrid scenarios’. These scenarios offer an improved energy confinement in fusion machines, but the necessary conditions change between machines with different characteristics. If successful, our approach will let us find the underlying patterns and even predict the optimal conditions for future facilities like ITER and the power plants that follow.”

—Professor Geert Verdoolaege, department of Applied Physics, Ghent University, Belgium Identification and confinement scaling of hybrid scenarios across multiple devices

“This project is about using machine learning to get faster simulations of the stability of the edge plasma in tokamaks, the ‘pedestal’ region. These kinds of calculations can already be done, but they are computationally heavy. Our goal is to substantially accelerate these tools so they can be used in fast everyday data analysis as well as in real-time applications like control.”

—Dr Aaro Järvinen at VTT, Finland Machine learning accelerated pedestal MHD stability simulations

“We will investigate the application of deep learning techniques to improve the reconstruction of signals detected with radar-like instruments in fusion devices. This is important because it will allow better and automatic control of the fusion plasma that needs to be efficiently confined in those devices and in future fusion reactors.”

—Dr José Vicente at IST, University of Lisbon, Portugal Deep Learning for Spectrogram Analysis of Reflectometry Data

These projects underscore the potential of Artificial Intelligence and Machine Learning to address key challenges in fusion research, paving the way for more efficient and effective control strategies as we move closer to realizing fusion energy.

Using machine learning to get faster simulations of the stability of the edge plasma ('pedestal') in tokamaks. Credit: A. Järvinen et al., VTT

Supported projects

David Zarzoso (CEA / CNRS, France) Artificial Intelligence augmented Scrape Off Layer modelling for capturing impact of filaments on transport and PWI in mean field codes simulations

Feda Almuhisen (CEA / Aix-Marseille Université, France) Towards Tokamak operations Conversational Artificial Intelligence Interface Using Multimodal Large Language Models

Augusto Pereira (CIEMAT, Spain) Testing cutting-edge Artificial Intelligence research to increase pattern recognition and image classification in nuclear fusion databases

Sven Wiesen (DIFFER, the Netherlands) Machine learning accelerated Scrape Off Layer L simulations: SOLPS-NN

Gergő Pokol (EK-CER, Hungary) Fast inference methods of advanced diagnostics for real-time control

Riccardo Rossi (ENEA / Università di Roma Tor Vergata, Italy) Artificial Intelligence-assisted Causality Detection and Modelling of Plasma Instabilities for Tokamak Disruption Prediction and Control

Michela Gelfusa (ENEA / Università di Roma Tor Vergata, Italy) Development of Physics Informed Neural Networks (PINNs) for Modelling and Prediction of Data in the Form of Time Series

Alessandro Pau (EPFL, Switzerland) Artificial Intelligence-assisted Plasma State Monitoring for Control and Disruption-free Operations in Tokamaks

Pawel Gasior (IPPLM, Poland) Laser Induced Breakdown Spectroscopy data-processing with Deep Neural Networks and Convolutional Neural Networks for chemical composition quantification in the wall of next-step fusion reactors

Jose Vicente (IST, Portugal) Deep Learning for Spectrogram Analysis of Reflectometry Data

Geert Verdoolaege (LPP-ERM-KMS / Ghent University, Belgium) Identification and confinement scaling of hybrid scenarios across multiple devices

Marcin Jakubowski (IPP, Germany) Leveraging Generative Artificial Intelligence Models for Thermal Load Control in High-Performance Steady-State Operation of Fusion Devices

Daniel Böckenhoff (IPP, Germany) Surrogate modelling of ray-tracing and radiation transport code for faster real-time plasma profile inference in a magnetic confinement device

Antti Snicker (VTT, Finland) Applying Artificial Intelligence/Machine Learning for Neutral Beam Injection ionization and slowing-down simulations using ASCOT/BBNBI

Aaro Järvinen (VTT, Finland) Machine learning accelerated pedestal Magneto Hydro Dynamics stability simulations

About EUROfusion

The EUROfusion consortium coordinates experts, students and facilities from across Europe to advance fusion energy in line with the EUROfusion fusion roadmap. Co-funded through the Euratom Research and Training Programme, EUROfusion supports the preparation for experiments at ITER and the development of the European demonstration fusion power plant DEMO. The programme also fosters fusion education, training, and industry collaboration.

