The artificial intelligence (AI) industry is changing and evolving rapidly. We are witnessing stunning advancements and innovative products that push the boundaries of what we thought was possible.

This week delivered some impressive, cutting-edge products that are set to transform various sectors and enhance personal productivity.

OpenAI has officially launched ChatGPT's web search feature, which retrieves real-time search results with relevant web links. You no longer need other web extensions or apps to get this functionality.

All search answers include source citations, and ChatGPT provides more in-depth answers by incorporating conversational context into its search queries.

Additionally, you can install the Chrome browser extension to set ChatGPT as your default search engine, letting you search with it quickly and directly!

How to use the ChatGPT web search feature if you are not a ChatGPT Plus subscriber

OpenAI has announced that ChatGPT Search is currently accessible only to ChatGPT Plus and Team users, who can use it immediately; paid users do not have to wait.

However, OpenAI plans to roll out access to free users over the coming months. In the meantime:

Wait for Rollout: If you are not a paid subscriber, you must wait for OpenAI to extend the feature to free users, which is expected in the upcoming months.

Join the Waitlist: If available, consider joining any waitlist that OpenAI may provide for early access to features like ChatGPT Search.

Check Regularly: Keep an eye on announcements from OpenAI regarding updates on when the search feature will be available for free users.

How does ChatGPT ensure the accuracy of its web search results?

ChatGPT ensures the accuracy of its search results through several methods:

Real-Time Data Retrieval: It accesses up-to-date information through search engines, gathering content from reputable sources and citing them.
Selection Criteria: ChatGPT prioritizes relevant, authoritative, and recent sources to match user queries.
Search Refinement: If initial results are lacking, it refines queries to improve relevance.
User Feedback: It adapts based on user corrections to improve responses over time.

HeyGen launched a new feature to create digital humans

HeyGen has launched a new feature that allows users to create digital humans using only photos without filming.

You can upload your own photo or enter a text prompt to generate a virtual character image, which can be used to train your own AI video digital avatar.

The more reference images you upload of the character, the greater the consistency of facial features in the generated images.

These digital humans feature natural body movements, customizable clothing and poses, and interchangeable backgrounds, and you can select gender, age, and ethnicity.

You can edit the script, choose different voices and emotional expressions, and generate the finished video quickly.

Suno has launched Personas

Suno has launched a new feature called Personas.

It allows users to save the core characteristics of a song, such as vocals, style, and atmosphere, which can then be reused in new creations.

This feature is designed to help you maintain your unique musical style.

How to create a Persona: Choose a song you like, click “Create,” and then make a Persona.

Add lyrics and style: You can add lyrics and style just as in regular creations.

Public and private settings: You can choose to set a Persona as public or private. Public Personas will have their own page, can be used by other users, and will appear in your library and personal profile.

GitHub introduced more AI models into GitHub Copilot

GitHub announced it is introducing more AI models into GitHub Copilot to enhance developers’ options and customization capabilities.

The new models include:

Claude 3.5 Sonnet
Gemini 1.5 Pro
o1-preview and o1-mini

GitHub has also launched GitHub Spark, a tool for building applications entirely with natural language.

You don’t need to know complex deployment techniques such as configuring servers or databases.

GitHub Spark will automatically complete all cloud setup and resource allocation in the background, enabling even beginners to create web applications entirely through natural language.

In other words, you just need to tell it “what you want to do,” and it will provide you with a functional app, making the process as simple as a conversation.

Stability AI has released the Stable Diffusion 3.5 Medium model

It is free for both commercial and non-commercial use. At 2.5 billion parameters, the model is specifically designed to run on consumer hardware.

The model requires only 9.9 GB of VRAM, so it can run on most standard consumer graphics cards.

It can generate high-quality images at multiple resolutions, producing results superior to other medium-sized models.

According to Stability AI analysis, Stable Diffusion 3.5 Large leads the market in prompt adherence and rivals much larger models in image quality.

How to use Stable Diffusion 3.5

Installation Steps for Local Use


You can now download the Stable Diffusion 3.5 models, including the new Medium, from Hugging Face and the inference code from GitHub, then run them on your own computer or other hardware.

Before you can run it on your computer, you should set up the prerequisites and install the essential libraries.

Once the required Python libraries are installed, you can run Stable Diffusion locally, as in the sketch below.
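If you prefer a script to a UI, here is a minimal local-inference sketch using the Hugging Face diffusers library. It assumes you have accepted the model license on the Hugging Face model page and logged in with huggingface-cli; the model ID and pipeline class follow the diffusers documentation for Stable Diffusion 3, while the prompt and sampler settings are only illustrative.

```python
# Minimal sketch: run Stable Diffusion 3.5 Medium locally via diffusers.
# Assumes the gated model license is accepted on Hugging Face and you
# are logged in (huggingface-cli login).
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",
    torch_dtype=torch.bfloat16,  # halves memory use versus float32
)
pipe = pipe.to("cuda")  # fits in ~10 GB VRAM; "cpu" works but is very slow

image = pipe(
    prompt="a lighthouse on a rocky cliff at sunset, photorealistic",
    num_inference_steps=28,  # illustrative values; tune to taste
    guidance_scale=4.5,
).images[0]

image.save("lighthouse.png")
```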

Online access

Using ComfyUI

ComfyUI offers a user-friendly interface for managing Stable Diffusion 3.5 workflows. You can drag workflow files into the interface and run image-generation tasks easily.

Using Hugging Face Spaces

Visit Hugging Face Spaces to run Stable Diffusion models directly in your browser without installation requirements.

For everyone else, many apps and websites will soon integrate the latest Stable Diffusion 3.5 models for image generation, so stay tuned for updates!

Ultralight-Digital-Human: an ultra-lightweight digital human model that can run on a mobile phone

Ultralight-Digital-Human is an ultra-lightweight digital human model that supports real-time operation on mobile devices.

The model’s algorithm is optimized to run smoothly, even on low-power devices.
Only 3 to 5 minutes of video is needed to complete the training.

Make sure that every frame of the video shows the person's full face and that the audio is clear and free of noise, then put the clip in a new folder.
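The repository's own preprocessing may differ, but as a hedged illustration, a short OpenCV script like the one below can pre-check a clip before training: it verifies that every frame contains a detectable face, then copies the clip into a fresh data folder. The function and folder names are hypothetical.

```python
# Illustrative pre-flight check for training footage (not the repo's
# official pipeline): every frame must contain a detectable face.
import os
import shutil
import cv2

def check_training_video(src_path: str, data_dir: str = "training_data") -> bool:
    # OpenCV ships this Haar cascade for frontal-face detection
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(src_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            print(f"No face detected in frame {frame_idx}; re-record this segment.")
            cap.release()
            return False
        frame_idx += 1
    cap.release()
    os.makedirs(data_dir, exist_ok=True)
    shutil.copy(src_path, data_dir)  # "put it in a new folder"
    return True

if __name__ == "__main__":
    check_training_video("my_face_clip.mp4")
```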

Additionally, through optimized data flow and inference processes, the model can process input data (such as video and audio) in real-time, enabling instant digital human responses.

The AI video platform D-ID launched new digital human tools


The AI video platform D-ID has launched two new digital human tools, Express and Premium+, designed for content creation and aimed at enabling businesses to deploy more realistic virtual humans in areas such as marketing, sales, and customer support.

The Express virtual human can be generated from only one minute of training video and can synchronize with the user's head movements.

The Premium+ virtual human requires a longer training video but can perform hand and torso movements, creating more realistic human interactions.

These tools make it easier to generate virtual human videos, reducing business costs in marketing and offering broader applicability.

Google Gemini API has introduced “Grounding with Google Search”

Google has launched the new Grounding with Google Search feature in its Gemini API and Google AI Studio, and it is simple to enable.

This feature leverages real-time data from Google Search to provide users with more accurate and up-to-date information, along with supporting links and search suggestions, making AI responses more reliable.

Grounding answers in fresh data obtained through searches reduces misinformation.
Real-time search fetches the latest information, allowing for better answers to time-sensitive queries.

Links to information sources are included in the answers, making it easier for users to verify the credibility of the information.
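As a sketch of what enabling this looks like in code, the snippet below uses the google-generativeai Python SDK. The model name and the google_search_retrieval tool flag follow the launch documentation, but grounding options may change, so treat this as an approximation and check the current API reference.

```python
# Sketch: enable Grounding with Google Search in the Gemini API.
# Model name and tool flag follow the launch docs and may change.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-pro-002")
response = model.generate_content(
    "What were this week's biggest AI announcements?",
    tools="google_search_retrieval",  # turns grounding on for this request
)

print(response.text)
# The grounding metadata carries source links and suggested searches.
print(response.candidates[0].grounding_metadata)
```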

Claude for Desktop

Anthropic has built a Claude desktop app! It is now available on Mac and Windows.

As your AI assistant, Claude can help you perform deeper work more quickly and creatively.

You can now use Claude on any device to converse, find answers to your questions, and analyze the content of images.

Claude has learned to understand charts and graphs in PDFs!


Anthropic is rolling out the ability to send Claude PDFs through the Anthropic API.
With the new PDF support beta, you can include a PDF directly in your API request, making research papers much easier to work through; a minimal request sketch follows the overview below.

The new Claude 3.5 Sonnet model now supports PDF input and understands both text and visual content within documents.

You can try this feature out in the feature preview.

You can ask specific questions about the content of the PDF, and Claude will answer based on its ability to read both the text and the images.

How does PDF support work?

  • The system will convert each page of the PDF into an image.
  • The system gains a better understanding of the PDF by analyzing text and images.
  • Other Claude features can be used simultaneously.
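To try the beta from code, here is a sketch using the Anthropic Python SDK. The pdfs-2024-09-25 beta flag and the document content block follow the launch announcement; both may change while the feature is in beta, and the file name and prompt are placeholders.

```python
# Sketch: send Claude a PDF via the Anthropic API's PDF support beta.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("paper.pdf", "rb") as f:  # placeholder file name
    pdf_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    betas=["pdfs-2024-09-25"],  # beta flag from the announcement
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                # each page is rendered to an image and analyzed with the text
                "type": "document",
                "source": {
                    "type": "base64",
                    "media_type": "application/pdf",
                    "data": pdf_b64,
                },
            },
            {"type": "text", "text": "Summarize the charts and graphs in this paper."},
        ],
    }],
)

print(message.content[0].text)
```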

How can I enable the Visual PDFs feature in Claude?

To enable the Visual PDFs feature in Claude, follow these steps:

  1. Access the Settings:
    Open the Claude interface and look for a flask icon or a settings menu.
  2. Enable Visual PDFs:
    Click on the flask icon, navigate to the Visual PDFs option and toggle it on. This will allow Claude to process and interpret images and visual elements within PDF documents.
  3. Upload Your PDF:
    Once the feature is enabled, you can upload a PDF document by dragging it into the chat window or by using the upload button.
  4. Interact with Claude:
    After uploading a PDF, you can ask Claude questions about both the text and the images it contains, enhancing your interaction with complex documents. This makes reading documents more convenient and boosts your efficiency!
