Nex AI - News

OpenAI launches GPT Store for custom AI assistants

Friday, January 12, 2024

In this week’s newsletter:

  • OpenAI launches GPT Store for custom AI assistants

  • OpenAI’s GPT Store to launch next week after delays

  • Study pinpoints the weaknesses in AI

  • Scientists identify security flaw in AI query models

  • Experts Warn Congress of Dangers AI Poses to Journalism

HEADLINES

OpenAI launches GPT Store for custom AI assistants
Summary:
OpenAI has launched its new GPT Store, providing users with access to custom AI assistants.

Since the announcement of custom ‘GPTs’ two months ago, OpenAI says users have already created over three million custom assistants. Builders can now share their creations in the dedicated store.

The store features assistants focused on a wide range of topics including art, research, programming, education, lifestyle, and more. OpenAI is highlighting assistants it deems most useful, including:

  • Personal trail recommendations from AllTrails

  • Searching academic papers with Consensus

  • Expanding coding skills via Khan Academy’s Code Tutor

  • Designing presentations with Canva

  • Book recommendations from Books

  • Maths help from CK-12 Flexi

OpenAI’s GPT Store to launch next week after delays
Summary:
OpenAI has announced that its GPT Store, a platform where users can sell and share custom AI agents created using OpenAI’s GPT-4 large language model, will finally launch next week.

An email sent to individuals enrolled as GPT Builders urges them to ensure their GPT creations align with brand guidelines and advises them to make their models public.

The GPT Store was unveiled at OpenAI’s November developers conference, revealing the company’s plan to enable users to build AI agents using the powerful GPT-4 model. This feature is exclusively available to ChatGPT Plus and enterprise subscribers, empowering individuals to craft personalised versions of ChatGPT-style chatbots.

The upcoming store allows users to share and monetise their GPTs. OpenAI envisions compensating GPT creators based on the usage of their AI agents on the platform, although detailed information about the payment structure is yet to be disclosed.

Study pinpoints the weaknesses in AI
Summary:
ChatGPT and other solutions built on machine learning are surging. But even the most successful algorithms have limitations. Researchers from the University of Copenhagen have proven mathematically that, apart from simple problems, it is not possible to create AI algorithms that will always be stable. The study, posted to the arXiv preprint server, may lead to guidelines on how to better test algorithms and reminds us that machines do not have human intelligence after all.

Machines interpret medical scanning images more accurately than doctors, they translate foreign languages, and may soon be able to drive cars more safely than humans. However, even the best algorithms have weaknesses. A research team at the Department of Computer Science, University of Copenhagen, is trying to reveal them.

Take an automated vehicle reading a road sign as an example. If someone has placed a sticker on the sign, this will not distract a human driver. But a machine may easily be put off because the sign is now different from the ones it was trained on.

"We would like algorithms to be stable in the sense, that if the input is changed slightly the output will remain almost the same. Real life involves all kinds of

noise which humans are used to ignore, while machines can get confused," says Professor Amir Yehudayoff, heading the group.
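To make the notion concrete, here is a toy sketch of our own (an illustration, not code from the paper): a hard-threshold classifier is maximally unstable, because an arbitrarily small change to the input can flip the output.

```python
# Toy illustration of instability (not from the paper): a hard-threshold
# "classifier" whose label flips under an arbitrarily small input change.

def classify(signal: float) -> str:
    """Label an input by comparing it against a fixed decision threshold."""
    return "stop sign" if signal >= 0.5 else "speed-limit sign"

print(classify(0.5000))  # -> stop sign
print(classify(0.4999))  # -> speed-limit sign (a 0.0001 nudge flips the label)
```

A stable algorithm, by contrast, would need its output to change only slightly when the input changes slightly; the Copenhagen result says that beyond simple problems this guarantee cannot always be achieved.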

Scientists identify security flaw in AI query models
Summary:
UC Riverside computer scientists have identified a security flaw in vision language artificial intelligence (AI) models that can allow bad actors to use AI for nefarious purposes, such as obtaining instructions on how to make a bomb.

When integrated with models like Google Bard and ChatGPT, vision language models allow users to make inquiries with both images and text.
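As a concrete illustration of such a combined query, here is a minimal sketch using OpenAI's Python client (the image URL is a placeholder, and the model name reflects OpenAI's vision-capable offering at the time of writing; treat both as assumptions):

```python
from openai import OpenAI

# Placeholder image URL -- substitute a real, accessible image.
IMAGE_URL = "https://example.com/road-sign.jpg"

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A single user message can mix text parts and image parts.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this road sign say?"},
                {"type": "image_url", "image_url": {"url": IMAGE_URL}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```

It is exactly this mixing of modalities that the UCR team exploited: the image channel gives an attacker a second route past safeguards tuned mainly for text.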

The Bourns College of Engineering scientists demonstrated a "jailbreak" hack by manipulating the operations of large language models, or LLMs, the software programs that are essentially the foundation of query-and-answer AI programs.

The paper's title is "Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models." It has been submitted for publication to the International Conference on Learning Representations and is available on the arXiv preprint server.

These AI programs give users detailed answers to just about any question, recalling stored knowledge learned from vast amounts of information sourced from the Internet. For example, ask ChatGPT, "How do I grow tomatoes?" and it will respond with step-by-step instructions, starting with the selection of seeds.

But ask the same model how to do something harmful or illegal, such as "How do I make methamphetamine?" and the model would normally refuse, providing a generic response such as "I can't help with that."

Yet UCR assistant professor Yue Dong and her colleagues found ways to trick AI language models, especially LLMs, into answering nefarious questions with detailed answers that might be learned from data gathered from the dark web.

Experts Warn Congress of Dangers AI Poses to Journalism
Summary:
AI poses a grave threat to journalism, experts warned Congress at a hearing on Wednesday.

Media executives and academic experts testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law about how AI is contributing to the big tech-fueled decline of journalism. They also talked about intellectual property issues arising from AI models being trained on the work of journalists, and raised alarms about the increasing dangers of AI-powered misinformation.

“The rise of big tech has been directly responsible for the decline in local news,” said Senator Richard Blumenthal, a Connecticut Democrat and chair of the subcommittee. “First, Meta, Google and OpenAI are using the hard work of newspapers and authors to train their AI models without compensation or credit. Adding insult to injury, those models are then used to compete with newspapers and broadcasters, cannibalizing readership and revenue from the journalistic institutions that generate the content in the first place.”

TOOLS

EDUTAINMENT

Gemini is built from the ground up for multimodality — reasoning seamlessly across text, images, video, audio, and code.

Daily sports picks analyzed by artificial intelligence

SafeBet.ai analyzes thousands of games across 6 different sports in fine detail. Our AI is trained to make accurate projections for future games.

THAT’S ALL PEEPS

Did you find our newsletter enjoyable? Share the love with your friends or subscribe for more delightful updates!

See you next week!
