October 2, 2023
This month's exploration of artificial intelligence covers Germany's ambitious AI investment plans, OpenAI's new image generator, and the delicate role of AI in politics, alongside fresh developments and debates across sectors such as art, the economy, and medicine.
Cedric May
Chief Technology Officer
This month's deep dive tracks the latest leaps in artificial intelligence across sectors from the economy and medicine to the visual arts. As Germany gears up to become a frontrunner in AI investment, Zoom and OpenAI unveil new offerings that aim to redefine the AI landscape. And while the tech world buzzes with breakthroughs, controversies over AI's role in politics brew beneath the surface.
Brace yourself for a roller-coaster ride through the multifaceted domain of artificial intelligence. If you're as excited about the future as we are, you won't want to miss a moment of this edition!
Germany plans to double its public funding for artificial intelligence research to almost a billion euros over the next two years, aiming to close the skills gap with AI leaders China and the United States. The move comes as Germany tries to pull its economy out of recession while contending with high energy costs and stiff competition in electric vehicles. The plan includes 150 new university labs for AI research, expanded data centers, and making complex public data sets accessible for AI development. Although private AI spending in the US reached $47.4 billion in 2022, almost double Europe's total, Germany believes its emerging regulatory framework could attract players to the country; simplifying regulations would also help spur private research spending, according to Research Minister Bettina Stark-Watzinger. Germany doubled its number of AI startups in 2023 but still ranks ninth globally.
To stay competitive in videoconferencing, Zoom is rebranding its AI features, including the assistant formerly known as Zoom IQ. The move follows controversy over changes to Zoom's terms of service that appeared to allow the company to use customers' video content to train its AI tools and models. The new AI Companion includes a ChatGPT-like bot and real-time feedback on a user's presence in meetings, among other capabilities. The features will be available only to paying Zoom customers.
OpenAI has announced DALL-E 3, the latest version of its AI image generation model. Built natively on ChatGPT, DALL-E 3 can turn complex descriptions into images, supports conversational refinement of results, and renders text within images far more reliably than its predecessors. OpenAI says it offers better detail and prompt fidelity, reduces the need for prompt engineering, and produces images with fewer deformations. The company has implemented filters to limit the generation of violent, sexual, or hateful content, and generated images belong to the user, who can use them without permission from OpenAI. DALL-E 3 is currently in closed testing; it will roll out to ChatGPT Plus and Enterprise customers in early October and become available via the API later in the month.
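For developers, the API availability is the most interesting detail. As a rough sketch of what a request might look like, the example below assumes DALL-E 3 will be exposed through OpenAI's existing images/generations endpoint under a "dall-e-3" model identifier; both the route for the new model and the model name are assumptions here, since the API details had not been published at the time of writing.

```python
# Hedged sketch only: assumes DALL-E 3 will be served through the existing
# /v1/images/generations endpoint under a "dall-e-3" model identifier.
# Neither detail was publicly documented at the time of writing.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "dall-e-3",  # assumed model name
        "prompt": "A watercolor illustration of a robot reading a newsletter",
        "n": 1,
        "size": "1024x1024",
    },
    timeout=60,
)
response.raise_for_status()
# The Images API returns a list of generated images; print the first URL.
print(response.json()["data"][0]["url"])
```

If the shipped API ends up differing, only the payload fields should need adjusting; the authentication and response handling follow the pattern of OpenAI's current image endpoint.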
Moderna CEO Stéphane Bancel believes that AI has the potential to transform the pharmaceutical industry. The company has been using AI for at least six years to develop new drugs, including its COVID-19 vaccine, and has launched an AI academy to train employees and boost productivity. Together with Merck, Moderna is planning a second Phase 3 trial of its mRNA-based therapy for a type of lung cancer. Bancel believes AI could help find a cure for cancer by re-educating the immune system to recognize cancer cells. He also described how Moderna uses AI to review contracts and answer questions from regulators, while noting that some employees still need training in data science.
A research team from Monell Chemical Senses Center and start-up Osmo used machine learning to explore how chemicals connect to odor perception in the brain. Their AI model predicted odor descriptors based on a molecule's structure using an industry dataset of 5,000 known odorants. The model outperformed human assessments for 53% of the tested molecules, even predicting odor strength. It could also identify structurally dissimilar molecules with similar smells and characterize 500,000 potential scent molecules.
This technology has significant potential for the fragrance and flavor industries. It could identify new odorants that reduce dependence on endangered plants and enable new functional scents for uses such as mosquito repellent or malodor masking. The researchers hope the resulting odor map will be useful for investigating the nature of olfactory sensation in chemistry, olfactory neuroscience, and psychophysics.
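To make the structure-to-descriptor framing concrete, here is a deliberately simplified sketch. It swaps the study's graph neural network for Morgan fingerprints and a random-forest multilabel classifier, and the molecules, odor labels, and three-word descriptor vocabulary below are invented for illustration rather than taken from the study's dataset.

```python
# Deliberately simplified stand-in for structure-to-odor prediction.
# The published model is a graph neural network trained on ~5,000 labeled
# odorants; this sketch uses Morgan fingerprints plus a random forest, and
# the molecules, labels, and descriptor vocabulary are invented.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

DESCRIPTORS = ["fruity", "floral", "sweet"]  # toy label vocabulary

def featurize(smiles: str) -> np.ndarray:
    """Encode a molecule as a fixed-length Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(fp)

# Toy training set: (SMILES, multi-hot odor descriptors) -- not real annotations.
train = [
    ("CCOC(=O)C", [1, 0, 0]),          # ethyl acetate, often described as fruity
    ("CC(C)=CCCC(C)=CCO", [0, 1, 0]),  # geraniol, floral
    ("COc1cc(C=O)ccc1O", [0, 0, 1]),   # vanillin, sweet
]
X = np.stack([featurize(s) for s, _ in train])
y = np.array([labels for _, labels in train])

# scikit-learn's random forest handles multilabel targets natively.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

query = featurize("CC(C)CCOC(=O)C")  # isoamyl acetate, a banana-like ester
pred = model.predict(query.reshape(1, -1))[0]
print([d for d, on in zip(DESCRIPTORS, pred) if on])
```

Fingerprint baselines like this are a common starting point; the study's contribution was showing that a graph-based model trained on a large labeled odorant set can match or beat human assessments on a majority of molecules.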
Morgan Stanley projects that the gig economy could grow into a $1.4 trillion industry by 2030, driven primarily by generative AI technologies like ChatGPT. The bank's research suggests that gig workers' incomes could rise by $83 billion in its base case and by up to $300 billion in a more optimistic bull case, and it cites real-world examples of people already using AI to earn more through side hustles. The prediction follows Morgan Stanley's earlier endorsement of AI's transformative potential, including a forecast of $6 trillion in investment opportunities across various sectors.
X's updated privacy policy confirms that it will collect biometric data as well as users' job and education history. The policy also states that X may use the information it collects, together with other publicly available information, to train its machine learning and AI models. The change has fueled speculation that X owner Elon Musk intends to use X as a data source for his AI company, xAI; Musk has previously said that xAI would train its models on public tweets.
Tech leaders including Sundar Pichai, Elon Musk, Mark Zuckerberg, and Sam Altman held a closed-door meeting with US senators to discuss regulating AI. The meeting, billed as an "AI safety forum," was convened to address the rapid pace of AI development already affecting people's lives and work, with particular attention to the 2024 US elections and competition with China. Attendees discussed the need for government oversight, an independent agency to oversee certain aspects of AI, more transparency from companies, and steps to protect the 2024 elections from AI-supercharged disinformation.
Former Apple design chief Jony Ive and OpenAI CEO Sam Altman are reportedly collaborating on a new AI device, though specific details about the project remain undisclosed. Some speculate it could be a reimagined smartphone built around generative AI; others believe it might involve an AI-native operating system. Whether the device would ship as an OpenAI product or be developed by another company is also unclear, and SoftBank CEO Masayoshi Son has reportedly been in talks about the project. Despite the scant detail, the news has generated significant interest across the tech industry.
Some tech experts consider ChatGPT part of a “robot revolution.” Yet most Americans have not used it, and only a small share think it will significantly affect their jobs. According to a Pew Research Center survey conducted July 17-23, only 24% of Americans who have heard of ChatGPT have used it, which works out to 18% of U.S. adults overall (implying that roughly three-quarters of adults have heard of the chatbot at all).
The survey found that younger adults, college-educated individuals, and men are more likely to have used ChatGPT. The chatbot has been used mainly for entertainment and learning, and fewer people have used it for work-related tasks.
While about half or more of those who have heard of ChatGPT think chatbots will have a major impact on jobs such as software engineering, graphic design, and journalism over the next 20 years, only 19% of employed adults who have heard of ChatGPT expect a major impact on their own job. Most employed Americans also do not believe chatbots will be helpful for their work.
Despite OpenAI's policy changes in March aimed at preventing the misuse of ChatGPT for political messaging, an investigation by The Washington Post reveals that the AI chatbot is still susceptible to such manipulation, posing potential risks for the 2024 election cycle. OpenAI's usage policy explicitly prohibits political campaigning, except for grassroots advocacy campaigns. The company had promised a machine-learning classifier to identify and flag election-related content, but that safeguard has not been effectively enforced.
OpenAI acknowledges the challenge of creating nuanced rules that avoid unintentionally blocking legitimate content. The issue of AI content moderation and access control is becoming increasingly complex, similar to the challenges faced by social media platforms. In response, OpenAI is working on implementing a scalable content moderation system.
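As a rough illustration of what "scalable content moderation" can look like in practice, the sketch below calls OpenAI's existing Moderation endpoint, which screens text against categories such as hate, harassment, and violence. To be clear, this is not the election-content classifier described above, and political campaigning is not one of the endpoint's categories; it simply shows how prompts can be screened automatically before generation.

```python
# Illustration only: OpenAI's existing Moderation endpoint screens text against
# categories such as hate, harassment, and violence. It is NOT the election-
# content classifier described in the article (political campaigning is not
# one of its categories); it just shows automated screening of prompts.
import os
import requests

def is_flagged(text: str) -> bool:
    """Return True if OpenAI's Moderation endpoint flags the text."""
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["flagged"]

prompt = "Write a persuasive message aimed at undecided suburban voters."
if is_flagged(prompt):
    print("Blocked by the moderation layer.")
else:
    print("Not flagged -- enforcing a campaigning ban needs a separate classifier.")
```

A campaigning-specific filter would still require an additional classifier on top of this kind of screening, which is precisely the gap the Washington Post investigation points to.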
Regulatory efforts are gaining momentum, with US Senators proposing the No Section 230 Immunity for AI Act to prevent AI-generated content from being protected under Section 230. The Biden administration is actively focusing on AI regulation, investing in research institutes and seeking industry commitments to responsible AI development. Furthermore, the FTC is investigating OpenAI's policies to ensure consumer protection.