
Snapchat’s New AI, Elon Musk’s New AI, GPT-4, AutoGPT, Facebook’s New AI

So let’s take a look at everything that was released last week in AI, and boy, oh boy, pay attention, because there is a lot of stuff that you missed. Coming in at number one, we have GPT-4’s multimodal feature being, I guess you could say, expedited by Microsoft. Essentially, researchers at Microsoft released Visual ChatGPT, a visual chatbot that lets you use ChatGPT to chat about images. The videos you’re seeing on the screen are demos, and there is, of course, a research paper in which they describe exactly how they got ChatGPT to describe, discuss, and even generate these images. You can actually try this yourself: the web page is still live, and all you’re going to need is an OpenAI API key. Once you put that in, you can use the web page and interact with it just like I did.
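
To get a feel for how this kind of image chat can work, here is a minimal, hypothetical sketch: a vision model captions the image, and ChatGPT answers questions grounded in that caption. The model choices and the wiring are assumptions for illustration, not the paper’s exact pipeline.

```python
# Hypothetical sketch: caption an image with a vision model, then chat about it.
# Assumes the transformers, Pillow, and openai packages and an OPENAI_API_KEY env var.
import openai
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")
inputs = processor(image, return_tensors="pt")
caption = processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)

# Ground the chat model in the caption so it can "see" the image.
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"You are chatting about an image. Caption: {caption}"},
        {"role": "user", "content": "What is happening in this picture?"},
    ],
)
print(reply.choices[0].message.content)
```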

Bloomberg also released their own large language model, BloombergGPT, which is centered around finance. Think of it like ChatGPT, but trained specifically for financial tasks. They trained this large language model on the enormous archive of financial documents Bloomberg has collected over the years, and the resulting model can make reasonably accurate predictions about market sentiment. Over time, they plan to improve it by training larger versions with more parameters.
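
To make the sentiment use case concrete, here is a minimal, hypothetical sketch of finance-flavored sentiment classification. BloombergGPT itself is not publicly available, so a generic chat model and prompt stand in for it here.

```python
# Hypothetical sketch: financial headline sentiment with a generic chat model.
# BloombergGPT is not public, so gpt-3.5-turbo stands in. Assumes OPENAI_API_KEY is set.
import openai

def market_sentiment(headline: str) -> str:
    """Classify a financial headline as bullish, bearish, or neutral."""
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer with one word: bullish, bearish, or neutral."},
            {"role": "user", "content": f"Headline: {headline}"},
        ],
        temperature=0,
    )
    return reply.choices[0].message.content.strip().lower()

print(market_sentiment("Central bank signals surprise rate hike"))  # likely "bearish"
```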

Next, we had Facebook releasing their new Segment Anything Model (SAM), which uses AI to detect every single item or object that might be in an image. And it isn’t just impressive because it picks out absolutely everything in an image; the demo also shows it identifying what each item is, so it works as both an image segmenter and an identifier. For example, right here, you can see that it’s able to identify how many cats are in this entire image, which is going to be useful across a wide variety of applications. They also showcased how the Segment Anything Model could be used in a real-world scenario. Right here, you can see AR goggles worn by a user that can accurately identify whatever is in the field of view. This could be really useful for people who have trouble seeing, such as those who are partially blind or otherwise visually impaired, and it could also help with quickly identifying objects or rare items. This technology opens up a world of applications and shows how quickly it could be put to work in everyday scenarios, like sorting items or finding things fast.
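
If you want a feel for the released code, here is a minimal sketch using the `segment_anything` package from Meta’s public repo; the checkpoint filename is whichever weights file you downloaded, and the printout at the end is just for illustration.

```python
# Minimal sketch: automatic mask generation with Meta's segment_anything package.
# Assumes a downloaded SAM checkpoint (e.g. sam_vit_h_4b8939.pth) and opencv-python.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("cats.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # one dict per detected segment

print(f"Found {len(masks)} segments")
for m in masks[:5]:
    print(m["area"], m["bbox"])  # pixel area and xywh bounding box for each mask
```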

Moving on, Microsoft released their new software called Jarvis, also known as HuggingGPT. Essentially, what they did was combine ChatGPT with the models hosted on Hugging Face. With Jarvis, they managed to get ChatGPT to complete pretty much any task a user can ask for, even if it is multimodal. For example, the first request was, “Please generate an image where a girl is reading a book, and her pose is the same as in the image I’ve just submitted. Then describe the new image with your voice.” You might be thinking, “How on Earth is ChatGPT, or Jarvis, going to do this when it doesn’t have a voice?”

You can see right here that the request runs through four stages, based on all the tools Jarvis has access to. Stage one is task planning: ChatGPT/Jarvis plans out exactly what it is going to do, selecting the subtasks and the order they run in. Stage two is model selection, where Jarvis decides which specific model to use for each subtask. Stage three is task execution, where it runs whichever AI model it picked, and finally there is response generation, which produces the desired output.

It worked really well, and the final response is there, as you can all see on the screen. Jarvis proved to be a real success, and it actually documents how it does all of this, so it shows that Microsoft is getting better and better at using AI to handle all kinds of requests. As you can see right here, it’s able to identify exactly what is going on in the image. Remember, this isn’t GPT-4; it’s done by chaining together a lot of existing tools online. Of course, you can actually try Jarvis yourself: head over to the HuggingGPT demo page, and all you’re going to need is a couple of API keys. Bear in mind, it is a little bit buggy, but it is something you can use.
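
As a rough illustration of that four-stage loop, here is a hypothetical sketch of an orchestrator in the same spirit; the prompts, the JSON plan format, and the `run_model` stub are all assumptions for illustration, not the actual JARVIS code.

```python
# Hypothetical HuggingGPT-style loop: plan -> select model -> execute -> respond.
# Assumes OPENAI_API_KEY is set and that the model replies with valid JSON when asked.
import json
import openai

def ask(prompt: str) -> str:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

def run_model(model_id: str, task: str) -> str:
    # Stage 3 stub: in the real system this would invoke a Hugging Face model.
    return f"<output of {model_id} for task '{task}'>"

request = "Generate an image of a girl reading a book, then describe it out loud."

# Stage 1: planning - break the request into ordered subtasks.
plan = json.loads(ask(
    f"Split this request into subtasks as a JSON list of strings: {request!r}. "
    "Reply with JSON only."
))

results = []
for task in plan:
    # Stage 2: model selection - have the LLM pick a suitable model.
    model_id = ask(f"Name one Hugging Face model id suited to: {task!r}. Reply with the id only.")
    # Stage 3: task execution.
    results.append(run_model(model_id.strip(), task))

# Stage 4: response generation - summarize everything for the user.
print(ask(f"User asked: {request!r}. Tool results: {results}. Write the final answer."))
```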

Something that I thought was very, very interesting was the deployment of some robots to help patrol Times Square in New York. Take a look at this: “Announcing three new policing technologies in New York City: the K5 autonomous security robot, the Digidog robotic dog, and the StarChase GPS attachment system. Our job is to fight crime and keep people safe, and these tools are significant steps forward in that vital mission.”

We also had someone showcase exactly what’s possible when you provide Runway’s Gen-1 with a perfect driving image and a very good video reference. I do want to give a shout-out to the person who created this, because it was very creative and goes to show exactly what’s possible with the right combination of artistic style and video-to-video generation. It definitely shows what’s going to be possible in the future.

Then we had Amazon releasing their new service, called Bedrock, which essentially allows businesses from all around the world to access large foundation models that they can easily use and fine-tune for their business-specific needs. Now, you’re about to see a clip of the CEO talking about why they launched this and how effective it’s going to be. “Companies don’t want to go through that, and so what they want to do is they want to work off of a foundational model that’s big and great already and then have the ability to customize it for their purposes. That’s what Bedrock is.”

On the Amazon Bedrock platform, you get text generation, chatbots, search, text summarization, image generation, and personalization. You have a choice of foundation models available on the platform, and you can then fine-tune them for your business’s specific needs.
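
For a sense of what calling Bedrock looks like in practice, here is a minimal sketch using the boto3 `bedrock-runtime` client; the model id and request body follow the Titan text format and are assumptions for illustration, since available models vary by account and region.

```python
# Minimal sketch: invoking a foundation model on Amazon Bedrock via boto3.
# Assumes AWS credentials with Bedrock access; model id and body format are illustrative.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Summarize the benefits of managed foundation models."}),
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```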

They also released CodeWhisperer, which is essentially an AI coding companion that helps people get more done faster. So, definitely another area in which Amazon has decided to innovate.

Then, of course, we had Nvidia release its text-to-video paper, in which they showcase many different examples of how they used Stable Diffusion to generate some very high-fidelity videos that are quite temporally consistent. You can see from all of these examples exactly what they’ve been able to do. I’m not sure how long they’ve been working on this, but a lot of the footage I’ve seen looks far better than some of the previous examples we’ve had from other companies that have tried this before. Text-to-video is by far one of the hardest problems in generative AI, and you can see on screen right now that some of these highway examples are very, very good, and some of the landscape examples look very effective as well. They manage to stay pretty smooth and coherent in what the animation looks like. So, I think Nvidia is definitely moving in the right direction, because they seem to be a step ahead of where everyone else is at this moment in the text-to-video field.
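
The core trick behind this kind of temporally consistent video diffusion is to take a frame-wise image backbone and interleave temporal layers so frames can attend to each other. Here is a hypothetical PyTorch sketch of that reshaping idea; the layer sizes and wiring are assumptions for illustration, not Nvidia’s exact architecture.

```python
# Hypothetical sketch: a temporal attention layer for video diffusion backbones.
# Each spatial location attends across the time axis; shapes and wiring are illustrative.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Lets each spatial location attend across the time axis of a video tensor."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        # Fold space into the batch so attention runs purely over time.
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        out, _ = self.attn(self.norm(seq), self.norm(seq), self.norm(seq))
        seq = seq + out  # residual: the per-frame image features survive
        return seq.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)

video = torch.randn(2, 8, 64, 16, 16)  # 2 clips, 8 frames, 64 channels, 16x16 latents
print(TemporalAttention(64)(video).shape)  # torch.Size([2, 8, 64, 16, 16])
```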

Then, of course, we had Google’s own video-generation work, where input images are used to generate a fully fledged video. A lot of these videos look very, very good. The difference between this and Nvidia’s approach is honestly quite interesting, because they use very different ways to get the result: this one is driven by a set of input images, whereas Nvidia’s is mainly text-to-video. As Google shows here, you input a number of driving images and get an output video in which your driving subject can be placed in an entirely new environment. Once again, Google is showing that it is still very much in the AI race and not that far behind its competitors.

And, of course, we had Elon Musk, who decided that he was going to create his own “TruthGPT,” because he, along with many ChatGPT users, feels that it is somewhat inherently biased towards certain arguments. For example, right here, this article talks about how ChatGPT’s apparent bias shows up on particular topics, where it declines to respond when asked certain questions. So Elon Musk has decided that he wants to create a ChatGPT-style AI that would do pretty much anything the user requests, as long as it is within standard ethical confines, just reporting the data. In his words: “A path to AI dystopia is to train an AI to be deceptive. So yeah, I’m going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe. And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans, because we are an interesting part of the universe.” He also frames it as being less about politics and more about AI safety, using the analogy that humanity could decide to hunt down all the chimpanzees and kill them, but doesn’t, because we find them interesting.

Then, of course, we had Google announcing that it’s going to be launching a new AI-powered search experience, codenamed “Project Magi,” which is going to be integrated into Google Search. Essentially, this is going to be a direct competitor to Bing’s recent integration of ChatGPT into its search. They are trying to expedite this as quickly as possible and release it next month, sometime in May. Google decided they needed to get this out fast because Samsung and Apple were reportedly in talks about switching the default search engine on their phones, so Google was pretty much forced to start developing this as soon as possible.

Then, of course, we had the release of AutoGPT, which now also powers “AgentGPT.” Essentially, it is a form of AI agent: you give it one prompt, and it runs off, organizes itself with a list of tasks, and then starts to scour the internet and execute those specific tasks. This has absolutely blown up on GitHub, and people are now using it to generate all kinds of text and articles and to complete many tasks. I do think this is likely what the future of AI is going to look like, because it removes a lot of the work people would otherwise have to do, depending on the nature of the task. Many people are now discussing the fact that once these agents can reliably do certain things, companies will hire them for cents on the dollar, and then, of course, we might actually see some major layoffs coming.
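
To make the “one prompt, self-organizing task list” idea concrete, here is a hypothetical sketch of an AutoGPT-style loop; the prompts, the step cap, and the stubbed web search are assumptions for illustration, not AutoGPT’s actual code.

```python
# Hypothetical AutoGPT-style agent loop: the model builds its own task list,
# executes one task per step, and summarizes results. Prompts are illustrative.
import openai

def llm(prompt: str) -> str:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip()

def web_search(query: str) -> str:
    # Stub: a real agent would call a search API here.
    return f"<search results for {query!r}>"

goal = "Write a short market overview of the AI agent space."
tasks = llm(f"Goal: {goal}\nList 3 concrete research tasks, one per line.").splitlines()
notes = []

for task in tasks[:5]:  # hard cap so the loop always terminates
    findings = web_search(task)
    notes.append(llm(f"Task: {task}\nResults: {findings}\nSummarize in one sentence."))

print(llm(f"Goal: {goal}\nNotes: {notes}\nWrite the final deliverable."))
```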

Then, of course, we had Quora release their new AI chatbot app, which is called Poe. Now, what’s good about Poe is that you’re able to customize different chatbots to your liking to fit a certain style or character, and then talk with that chatbot however you want. What’s also cool about Poe is that they give you access to GPT-4, and to Claude Instant, which is another competitor to ChatGPT. As you can see right here, I set up a bot that lets me interact with someone with Steve Jobs’ personality for advice on YouTube. And then, of course, they have the original Sage bot, a general-knowledge bot released by Quora themselves.
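
Custom bots like this generally come down to a persona system prompt. Here is a minimal, hypothetical sketch of the same idea with a generic chat API; Poe’s actual bot creation happens through a web form, so this is just the underlying pattern.

```python
# Hypothetical sketch: a persona bot via a system prompt, the pattern behind
# customizable chatbots. The persona text is illustrative. Assumes OPENAI_API_KEY.
import openai

persona = (
    "You are a mentor with Steve Jobs' personality: direct, focused on "
    "simplicity and storytelling. Give advice about growing a YouTube channel."
)

history = [{"role": "system", "content": persona}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep conversation state
    return answer

print(chat("How should I think about my channel's first 100 videos?"))
```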

Now, of course, we also had Snapchat release their own AI bot, called My AI, which is meant to be a kind of sidekick. But if I’m being honest, the results I’ve seen floating around the internet are very, very up and down. Some people report great responses, while others report responses that simply aren’t accurate at all. I’m guessing they wanted to rush this out pretty quickly, which is why we’re getting it in this somewhat unfinished state. And, of course, we had Bard, which initially launched riddled with bugs and many mathematical failures, getting a very nice update: it is going to be powered by PaLM, the 540-billion-parameter model that has proven very effective at real-world tasks, including driving robots. This should make Bard much more effective at what it does.

