The Chinese tech giant reportedly announced that its two new models, Qwen-VL and Qwen-VL-Chat, will be open source, allowing researchers, academics, and companies around the world to build their own AI applications on top of them without training systems from scratch, saving time and money.
Alibaba has released a new artificial intelligence model that it says can recognise images and carry out more complex dialogues than the company’s prior offerings. According to Alibaba, Qwen-VL can generate descriptions of images and answer free-form questions about them. The release comes as global competition for technological dominance in AI intensifies.
Qwen-VL-Chat, on the other hand, is designed for what Alibaba describes as more “complex interaction,” such as comparing multiple image inputs and handling several rounds of questions. Alibaba claims that Qwen-VL-Chat can solve mathematical equations shown in a picture, write stories based on user input, and generate graphics from photos. In one example provided by Alibaba, the input is a Chinese-language hospital sign; the model reads the sign and gives directions to the requested healthcare department.
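For developers curious what using the open-sourced chat model might look like, the following is a minimal sketch, not an official recipe. It assumes the weights are published on Hugging Face under Qwen/Qwen-VL-Chat and that the checkpoint ships the chat helpers described on its model card (tokenizer.from_list_format and model.chat) via trust_remote_code; the image path and questions are placeholders.

```python
# Minimal sketch: asking Qwen-VL-Chat free-form questions about an image.
# Assumes the checkpoint is hosted on Hugging Face as "Qwen/Qwen-VL-Chat"
# and exposes from_list_format / model.chat through trust_remote_code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True
).eval()

# Combine an image and a question into one query.
# "hospital_sign.jpg" is a placeholder path, not a file shipped with the model.
query = tokenizer.from_list_format([
    {"image": "hospital_sign.jpg"},
    {"text": "Which floor is the ophthalmology department on?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)

# Follow-up question in the same conversation (multi-round dialogue).
response, history = model.chat(
    tokenizer, "How do I get there from the main entrance?", history=history
)
print(response)
```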
Most work in generative AI, in which computers are trained to produce content in response to human prompts, has so far concentrated on text responses. Like Qwen-VL-Chat, the newest version of OpenAI’s ChatGPT can interpret images and reply in text. Both of Alibaba’s newest models are extensions of Tongyi Qianwen, the large language model (LLM) the company launched earlier this year. Chatbots rely on LLMs, which are AI models trained on massive datasets.
Earlier this month, the Hangzhou-headquartered company open-sourced two other AI models. Alibaba’s cloud division is aiming to restart growth as it prepares to go public, and distributing the models as open source could help it win more users for its AI offerings, since they can be adopted without paying licensing fees.