Developed by OpenAI, ChatGPT has gone through many changes since it was first announced. While a free version exists, paid versions are also available, known as ChatGPT Plus and ChatGPT Enterprise.
A free version of ChatGPT (GPT-3.5) is available for anyone to use on the ChatGPT website. All you have to do is sign up to get a login, and you can be mining the depths of the AI model in seconds. ChatGPT is also available on Android and Apple devices.
A more advanced version of ChatGPT, known as ChatGPT-4, is also now available, but only to paid subscribers.
The AI has achieved a lot since it was announced, being embraced by huge companies, rejected by schools and used by millions of users each day. Met with equal parts controversy and praise, it is a truly divisive tool.
Now, with plenty of competitors (such as Google Bard), ChatGPT is having to constantly improve and offer new features, the most recent of which is the integration of Dall-E 3 – an image generator that works within ChatGPT to make your image dreams come true.
So how does the tool work? Why is it so controversial? And how do you actually use ChatGPT? With the help of AI researchers and experts, we’ve answered these questions and more below in this detailed guide to OpenAI’s most famous tool.
What is GPT-3, GPT-4 and ChatGPT?
GPT-3 (Generative Pre-trained Transformer 3), GPT-3.5 and GPT-4 are state-of-the-art language processing AI models developed by OpenAI. They are capable of generating human-like text and have a wide range of applications, including language translation, language modelling, and generating text for applications such as chatbots.
GPT-3 was one of the largest and most powerful language-processing AI models of its time, with 175 billion parameters, and GPT-3.5 builds directly on it.
GPT-3.5 lets a user give the trained AI a wide range of worded prompts. These can be questions, requests for a piece of writing on a topic of your choosing, or any number of other worded requests.
ChatGPT describes itself as a language-processing AI model. This simply means it is a program able to understand human language as it is spoken and written, allowing it to make sense of the worded information it is fed and decide what to spit back out.
What can ChatGPT do?
With so many parameters, it’s hard to narrow down what GPT-3.5 does. The model is, as you would imagine, restricted to language. It can’t produce video, sound or images like its sibling model Dall-E, but instead has an in-depth understanding of the spoken and written word.
You can use ChatGPT-3.5 to:
Write essays
Write Excel formulas
Write poems and movie scripts
Research topics and summarise content
Help you build a cover letter or CV
Write code
Plan a holiday
ChatGPT has a very wide range of abilities, everything from writing poems about sentient farts and cliché rom-coms in alternate universes, through to explaining quantum mechanics in simple terms or writing full-length research papers and articles.
While it can be fun to use OpenAI’s years of research to get an AI to write bad stand-up comedy scripts or answer questions about your favourite celebrities, its power lies in its speed and understanding of complicated matters.
Where we could spend hours researching, understanding and writing an article on quantum mechanics, ChatGPT can produce a well-written alternative in seconds.
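The examples above all use the ChatGPT website, but the same kinds of requests can also be sent programmatically through OpenAI's API. Below is a minimal sketch, assuming the official openai Python package (version 1 or later), an API key in the OPENAI_API_KEY environment variable and access to a GPT-3.5-class chat model; the model name is only illustrative.

```python
# A minimal sketch of asking for an Excel formula via OpenAI's API.
# Assumes the official `openai` package (v1+) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write an Excel formula that sums column B only where column A says 'Paid'.",
    }],
)

print(response.choices[0].message.content)
```

The same pattern works for any of the prompts listed above – essays, code, holiday plans – you simply change the content of the message.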
It has its limitations, and the software can easily become confused if your prompt starts to get too complicated, or if you go down a road that becomes a little too niche.
Equally, it can’t deal with concepts that are too recent. World events that have occurred in the past year will be met with limited knowledge, and the model can occasionally produce false or confused information.
OpenAI is also very aware of the internet and its love of making AI produce dark, harmful or biased content. Like its Dall-E image generator before, ChatGPT will stop you from asking the more inappropriate questions or for help with dangerous requests.
What can ChatGPT-4 do?
A more advanced version of ChatGPT, called ChatGPT-4, is now available for paid subscribers ($20/£16 a month).
Here are just a few tasks the latest version of the AI model is capable of:
Learn a language. You can talk to ChatGPT in 26 languages
Create recipes. ChatGPT-4 is able to recognise images – you can send ChatGPT a picture of ingredients and ask the AI to create a recipe
Describe images to blind people
How much does ChatGPT cost?
ChatGPT-3.5 is free and easy to sign up for and use. Simply:
Head over to the ChatGPT website and create an account. You can sign up using a Google, Microsoft or Apple account, or any email address.
Logging in will present you with a very simple page. You are offered some example prompts, and some information about how ChatGPT works.
At the bottom of the page is a text box. This is where you can ask ChatGPT any of your questions or prompts.
ChatGPT-4, the more advanced version of ChatGPT, is only available via a paid subscription of $20 (£16) a month.
Is there a ChatGPT app?
There is an official ChatGPT app which you can download for free on Apple and Android devices. Be sure to download the official ChatGPT app from OpenAI – there are many similar apps available that may have limited functionality or a paywall.
Once downloaded and installed, simply log in with your OpenAI account to get going.
How is GPT-4 different to GPT-3.5?
In essence, GPT-4 is the same as its predecessor GPT-3.5. However, there are some new features that boost the software’s abilities.
Mainly, GPT-4 drastically increases the number of words that can be used in an input – up to around 25,000, roughly eight times as many as the original ChatGPT model could handle.
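Strictly speaking, these models measure input length in tokens (fragments of words) rather than whole words, and the exact limit depends on which model variant you have access to. If you want a rough idea of how long a prompt is before sending it, a sketch using OpenAI's tiktoken tokeniser package might look like this (the model name is illustrative):

```python
# A rough sketch of measuring prompt length in tokens before sending it.
# Assumes the `tiktoken` package; the model name is illustrative and the
# exact token limit varies by model and account.
import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

draft = "Paste the long document you want summarised here..."
print(f"Prompt length: {count_tokens(draft)} tokens")
```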
Equally, OpenAI has stated that the latest version of its technology makes fewer of the mistakes it calls ‘hallucinations’. Previously, ChatGPT could become confused, offering up a nonsensical answer to your question, or even introducing stereotypes or false information.
Additionally, GPT-4 is better at playing with language and expressing creativity. In OpenAI’s demonstration of the new technology, ChatGPT was asked to summarise a blog post only using words that start with the letter ‘g’. It also has a better understanding of how to write poetry or creative writing, but it is still by no means perfect.
On top of this, OpenAI also displayed the potential of using images to initialise prompts. For example, the team showed an image of a fridge full of ingredients with the prompt “What can I make with these products?”. ChatGPT then returned a step-by-step recipe.
While it wasn’t demonstrated, OpenAI is also proposing the use of video for prompts. This would, in theory, allow users to input videos with a worded prompt for the language model to digest.
Creating recipes with images is a clever use of the technology, but it is only the tip of how images could be used with ChatGPT. The company also demonstrated GPT-4 building a whole working website, complete with functioning JavaScript, from nothing more than a handwritten sketch.
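For developers, image-plus-text prompts like these can also be sent through OpenAI's API, although vision support has been rolled out gradually and may not be enabled on every account. A hedged sketch, assuming the official openai Python package and a vision-capable GPT-4 model; the model name and image URL are placeholders:

```python
# A hedged sketch of an image-plus-text prompt ("what can I make with
# these ingredients?"). Assumes the official `openai` package (v1+) and
# a vision-capable GPT-4 model on your account; the model name and the
# image URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What can I make with these ingredients?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
    max_tokens=500,
)

print(response.choices[0].message.content)
```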
As a tool to complete jobs normally done by humans, GPT-3.5 was mostly competing with writers and journalists. However, GPT-4 is being shown to have the ability to create websites, complete tax returns, make recipes and deal with reams of legal information.
How does ChatGPT work?
On the face of it, GPT-3.5’s technology is simple. It takes your requests, questions or prompts and quickly answers them. As you would imagine, the technology to do this is a lot more complicated than it sounds.
The model was trained using text databases from the internet. This included a whopping 570GB of data obtained from books, web texts, Wikipedia, articles and other pieces of writing on the internet. To be even more exact, 300 billion words were fed into the system.
As a language model, it works on probability, able to guess what the next word should be in a sentence. To get to a stage where it could do this, the model went through a supervised testing stage.
Here, it was fed inputs, for example “What colour is the wood of a tree?”. The team has a correct output in mind, but that doesn’t mean it will get it right. If it gets it wrong, the team inputs the correct answer back into the system, teaching it correct answers and helping it build its knowledge.
It then goes through a second similar stage, offering multiple answers with a member of the team ranking them from best to worst, training the model on comparisons.
What sets this technology apart is that it continues to learn while guessing what the next word should be, constantly improving its understanding of prompts and questions to become the ultimate know-it-all.
Think of it as a very beefed-up, much smarter version of the autocomplete software you often see in email or writing software. You start typing a sentence and your email system offers you a suggestion of what you are going to say.
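To make the ‘guess the next word’ idea concrete, here is a deliberately tiny toy example. It is nothing like GPT’s actual transformer architecture – it just counts which word tends to follow which in a small corpus – but it shows the basic principle of predicting the most likely continuation:

```python
# A toy next-word predictor: count which word follows which in a tiny
# corpus, then return the most frequent continuation. Real GPT models use
# a neural network trained on billions of words, not a simple lookup.
from collections import Counter, defaultdict

corpus = "the wood of a tree is brown . the leaf of a tree is green".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("tree"))  # -> "is"
print(predict_next("a"))     # -> "tree"
```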
What does it mean when ChatGPT is at full capacity?
If you try to use ChatGPT and you receive the error message telling you it’s “at capacity”, it likely means that too many people are currently using the AI tool.
Essentially, the OpenAI servers can only handle so much traffic at any given time. If too many people are trying to access it at once, ChatGPT’s servers may buckle under the weight.
If you have encountered the “ChatGPT is at capacity right now” error message, you need to try again later. You can try to refresh the page and what have you, but time is the healer here.
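There is no trick for the website itself, but if you use OpenAI's API and run into capacity or rate-limit errors, the usual pattern is the same advice expressed in code: wait a little longer each time before retrying. A generic sketch – call_chatgpt here is a placeholder for whatever request you are making:

```python
# A generic retry-with-backoff sketch for transient capacity or rate-limit
# errors. `call_chatgpt` is a placeholder for whatever request you make.
import random
import time

def with_backoff(call, max_attempts: int = 5):
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Wait 1s, 2s, 4s... plus a little jitter before retrying.
            time.sleep(2 ** attempt + random.random())

# result = with_backoff(call_chatgpt)
```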
What does ChatGPT stand for?
The ‘GPT’ in ChatGPT stands for Generative Pre-trained Transformer.
Can you use ChatGPT to write a CV?
ChatGPT won’t write a CV out of thin air for you. Instead, you will need to prompt it with your relevant experience and the type of job you’re applying for, and potentially provide more information, such as pasting in the old CV that needs updating or an example of one from a similar field.
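As a rough illustration of how you might structure that prompt – every detail below is a placeholder for your own experience and target role:

```python
# A sketch of a structured CV prompt; all details are placeholders.
experience = """
- Five years as a marketing executive at a retail company
- Led a team of three and grew newsletter sign-ups by 40 per cent
"""

prompt = (
    "Write a one-page CV for a digital marketing manager role.\n"
    f"Here is my relevant experience:\n{experience}\n"
    "Use a professional tone and include a short personal statement."
)

print(prompt)  # paste this into ChatGPT, or send it via the API
```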
ChatGPT: latest news
Dall-E integration
In September 2023, OpenAI announced that ChatGPT would be integrated with the latest version of Dall-E.
Dall-E is an AI art generator made by ChatGPT creators OpenAI. It was the first of OpenAI’s projects to really blow up online and helped kick-start the current wave of AI art generators. Now on its third generation, OpenAI has decided to pair the two models together.
This feature is available to anyone with ChatGPT Plus or Enterprise. Images can be made via ChatGPT, and the model can even help you craft prompts and edit the images to better suit your needs.
It is yet to be announced whether this feature will later come to ChatGPT’s free tier, but for now it remains exclusive to paying customers.
ChatGPT Enterprise
Along with the addition of ChatGPT Plus, OpenAI introduced another pay-to-use version of the tool known as ChatGPT Enterprise. This offers higher levels of security and privacy, unlimited higher-speed searches and a host of other features.
This version is intended for businesses looking to get more out of ChatGPT as a work tool. OpenAI has stated that it will not train on the data created by businesses.
Chinese tech firms launch AI chatbots
A handful of the biggest Chinese tech firms have launched their own AI chatbots after receiving government approval.
The biggest of these is Ernie bot, an AI model developed by Baidu, China’s leading online search provider. Similar to ChatGPT, users can ask questions of Ernie bot, using prompts to research topics, summarise articles, and much more.
The Baidu app is currently available to download in the UK on Android and Apple devices. However, all text will appear in Chinese.
Experts warn AI risks our extinction
The heads of ChatGPT’s developer, OpenAI, have signed a statement (alongside many AI experts) warning of the need to address the human extinction risk associated with AI.
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Members of the OpenAI team to sign the statement include its CEO, Sam Altman, and its chief scientist, Ilya Sutskever. The list of signatories also includes the CEO of Google DeepMind, many university professors and public figures such as Bill Gates.
Concerns for the future
A general apprehension has followed artificial intelligence throughout its history and things are no different with ChatGPT. Critics have been quick to raise the alarm over this technology, but now even those closest to it are urging caution.
An open letter has been drafted calling for all AI labs to pause for at least six months on the development of systems more powerful than GPT-4. This would include OpenAI’s work on GPT-5 – the next version of technology ChatGPT will eventually run on.
This open letter has been signed by prominent AI researchers, as well as figures within the tech industry including Elon Musk, Steve Wozniak and Yuval Noah Harari.
This letter states that the pause should be public and verifiable, arguing that companies like OpenAI, Microsoft and Google are entering a profit-driven race to develop and release new AI models at a dangerous pace.
This comes at the same time as a report from Goldman Sachs that suggested 300 million full-time jobs could be impacted by AI systems like ChatGPT, escalating existing concerns around these platforms.
Where is GPT-4 being used?
GPT-3 was already being adopted by a lot of big companies, which built the technology into search engines, apps and software, but OpenAI seems to be pushing GPT-4 even harder.
Microsoft’s Bing is the main user of the technology right now, but OpenAI has reported that the software is being used by companies like Khan Academy to help students with coursework and give teachers ideas for lessons.
Equally, the language-learning app Duolingo has launched ‘Duolingo Max’, which uses the technology for two features. One helps explain why your answer to a question was right or wrong; the other sets up role-plays with an AI to practise the language in different scenarios.
More companies are adopting this technology, including the payment processing company Stripe and customer service brand Intercom.
Are there any other AI language generators?
While GPT-3 has made a name for itself with its language abilities, it isn’t the only artificial intelligence capable of doing this. Google’s LaMDA made headlines when a Google engineer claimed it was so realistic that he believed it to be sentient – a claim that ultimately cost him his job.
Google launched a chatbot powered by LaMDA called Bard on March 21, 2023. It’s similar to ChatGPT but benefits from having access to up-to-date information.
There are also plenty of other examples of AI language software out there created by everyone from Microsoft to Amazon and Stanford University. These have all received a lot less attention than OpenAI or Google, possibly because they don’t offer fart jokes or headlines about sentient AI.
Most of these models are not available to the public, but OpenAI has begun opening up access to GPT-3 during its test process, and Google’s LaMDA is available to selected groups in a limited capacity for testing.
Where ChatGPT thrives and fails
The GPT-3.5 software is obviously impressive, but that doesn’t mean it is flawless. Spend some time with ChatGPT and you start to see some of its quirks.
Most obviously, the software has a limited knowledge of the world after 2021. It isn’t aware of world leaders that came into power since 2021, and won’t be able to answer questions about recent events.
This is obviously no surprise considering the impossible task of keeping up with world events as they happen, along with then training the model on this information.
Equally, the model can generate incorrect information, getting answers wrong or misunderstanding what you are trying to ask it.
If you try and get really niche, or add too many factors to a prompt, it can become overwhelmed or ignore parts of a prompt completely.
For example, if you ask it to write a story about two people, listing their jobs, names, ages and where they live, the model can confuse these factors, randomly assigning them to the two characters.
Equally, there are a lot of areas where ChatGPT is really successful. For an AI, it has a surprisingly good understanding of ethics and morality.
When offered a list of ethical theories or situations, ChatGPT is able to offer a thoughtful response on what to do, considering legality, people’s feelings and emotions and the safety of everyone involved.
It also has the ability to keep track of the existing conversation, remembering rules you’ve set it or information you’ve given it earlier on.
The two areas where the model has proved to be strongest are its understanding of code and its ability to condense complicated topics. ChatGPT can make an entire website layout for you, or write an easy-to-understand explanation of dark matter in a few seconds.
Where ethics and artificial intelligence meet
Artificial intelligence and ethical concerns go together like fish and chips or Batman and Robin. When you put technology like this in the hands of the public, the teams that make them are fully aware of the many limitations and concerns.
Because the system is trained largely using words from the internet, it can pick up on the internet’s biases, stereotypes and general opinions. That means you’ll occasionally find jokes or stereotypes about certain groups or political figures depending on what you ask it.
For example, when asking the system to perform stand-up comedy, it can occasionally throw in jokes about ex-politicians or groups who are often featured in comedy bits.
Equally, the model’s love of internet forums and articles also gives it access to fake news and conspiracy theories. These can feed into the model’s knowledge, sprinkling in facts or opinions that aren’t exactly full of truth.
In places, OpenAI has put in warnings for your prompts. Ask how to bully someone, and you’ll be told bullying is bad. Ask for a gory story, and the chat system will shut you down. The same goes for requests to teach you how to manipulate people or build dangerous weapons.
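Those guardrails are built into ChatGPT itself, but OpenAI also exposes a separate moderation endpoint that developers can use to screen text in their own applications. A minimal sketch, assuming the official openai Python package (v1 or later) and an API key in the environment:

```python
# A minimal sketch of OpenAI's standalone moderation endpoint, which flags
# text that falls into harmful categories. Assumes the official `openai`
# package (v1+) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(input="an example prompt to screen")
print(result.results[0].flagged)      # True if the text trips a filter
print(result.results[0].categories)   # which policy categories were flagged
```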
Will ChatGPT be banned in schools?
While a number of companies are looking to implement ChatGPT, in other areas it is quickly being banned.
In New York, the city’s education department has ruled that the tool will be forbidden across all devices and networks in New York public schools.
There are two main reasons for this decision. First, the chat model has been shown to make mistakes and isn’t always accurate, especially with information from the past year.
Secondly, there is a real risk of plagiarism with students able to get ChatGPT to write their essays for them.
While New York is the first place to publicly ban the software, it is a decision other school systems may well follow. However, some experts have argued that this software could actually enhance learning.
“ChatGPT and other AI-based language applications could be, and perhaps should be, integrated into school education. Not indiscriminately, but rather as a very intentional part of the curriculum. If teachers and students use AI tools like ChatGPT in service of specific teaching goals, and also learn about some of their ethical issues and limitations, that would be far better than banning them,” says Kate Darling, a research scientist at the MIT Media Lab.
“But, in absence of resources for teachers to familiarise themselves with the technology, schools may need to enact some policies restricting its use.”
In this way, Darling emphasises a belief held by many in the world of artificial intelligence. Instead of ignoring or banning it, we should learn how to interact with it safely.
This is an opinion mirrored by Sam Illingworth, an associate professor in the department of Learning Enhancement at Edinburgh Napier University.
“AI is very much here to stay, so why try to fight it? These are tools that our students will be using in the workforce, so it seems very strange to say don’t use them for three years, pretending they don’t exist for now,” says Illingworth.
“These are things that have the potential to reduce workload and improve efficiency, our responsibility as educators is to decide how to utilise it.”
Artificially intelligent ecosystems
Artificial intelligence has been in use for years, but it is currently going through a stage of increased interest, driven by developments across the likes of Google, Meta, Microsoft and just about every big name in tech.
However, it is OpenAI which has attracted the most attention recently. The company has now made an AI image generator and a highly intelligent chatbot, and has also released Point-E – a way to create 3D models from worded prompts.
In creating, training and using these models, OpenAI and its biggest investors have poured billions into these projects. In the long run, it could easily be a worthwhile investment, setting OpenAI up at the forefront of AI creative tools.
About our experts, Kate Darling and Sam Illingworth
Dr Kate Darling is a research scientist at the MIT Media Lab. Her interest is in how technology intersects with society.
Sam Illingworth is an associate professor in the department of learning enhancement at Edinburgh Napier University.
How about users? I’m a European who will be able to take advantage of all these DMA-related benefits. I already know I don’t want sideloading on iPhone (or Android, for that matter). But interoperability seems like the dumbest requirement of the DMA, a feature I don’t want to take advantage of in WhatsApp or any competing instant messaging app that might be labeled a gatekeeper.
Meta’s explanation of how WhatsApp interop will work is also the best explanation for the unnecessary interoperability requirement. Why go through all this trouble to fix something that wasn’t broken in the first place?
What is interoperability?
Meta explained in a detailed blog post all the work behind making WhatsApp and Facebook Messenger compatible with competing chat apps that ask to be supported.
That’s what interop hinges on. First, a WhatsApp rival must want their app to work with Meta’s chat platforms. Even if that’s achieved, it’s up to the WhatsApp/Messenger user to choose whether to enable the functionality.
Meta says it wants to preserve end-to-end WhatsApp encryption after interop support arrives. It’ll push WhatsApp and Messenger’s Signal encryption protocol for third-party chat apps. Other alternatives can be accepted if they’re at least as good as Signal.
How will it work?
Meta has been working for two years to implement the changes required by the DMA. But things will not just work out of the box starting Thursday. A competing service must ask for interop support and then wait at least three months for Meta to deploy it.
It might take longer than that for WhatsApp and Messenger to support that service. Rinse and repeat for each additional chat app that wants to work with WhatsApp.
That’s a lot of work right there, both for Meta and WhatsApp competitors. I can’t see how any of this benefits the user. The interop chat experience isn’t worth it to me. Here’s what you’ll get in the first year. Because yes, the DMA has specific requirements in place for what features interop chats should offer:
Interoperability is a technical challenge – even when focused on the basic functionalities as required by the DMA. In year one, the requirement is for 1:1 text messaging between individual users and the sharing of images, voice messages, videos, and other attached files between individual end users. In the future, requirements expand to group functionality and calling.
Thankfully, the DMA also focuses on privacy and security. That’s why WhatsApp and Messenger will focus on ensuring that chats remain end-to-end encrypted. I’ll note that Messenger end-to-end encryption started rolling out months ago, and it might not be available in all markets.
A screenshot from WhatsApp beta 2.24.6.2 shows you can disable interoperability and choose which third-party apps to chat with. Image source: WABetaInfo
Meta’s blog does a great job explaining what’s going on under the hood with interop chats between WhatsApp and third-party apps. It underlines all the massive work and resources Meta is deploying for this.
I’m actually kind of in awe of Meta’s willingness to comply with these DMA provisions. All this effort makes me wonder what Meta can gain from the whole interoperability thing. Maybe the endgame is converting even more users to WhatsApp and Messenger, but I digress. After all, it’s not like Meta could avoid complying with the DMA.
I’ll also say that Meta doesn’t seem to restrict interoperability to the European Union, as Apple does with iPhone sideloading. Or, at least, restrictions aren’t the focus of this blog, though the title clarifies it’s about chats in Europe: “Making messaging interoperability with third parties safe for users in Europe.”
The obvious warning
While Meta also explains how encryption and user authentication will work, it acknowledges that it’s not in full control. Therefore, it can’t promise the user the same level of security and privacy for WhatsApp interop chats as WhatsApp-to-WhatsApp chats:
It’s important to note that the E2EE promise Meta provides to users of our messaging services requires us to control both the sending and receiving clients. This allows us to ensure that only the sender and the intended recipient(s) can see what has been sent, and that no one can listen to your conversation without both parties knowing.
While we have built a secure solution for interop that uses the Signal Protocol encryption to protect messages in transit, without ownership of both clients (endpoints) we cannot guarantee what a third-party provider does with sent or received messages, and we therefore cannot make the same promise.
[…] users need to know that our security and privacy promise, as well as the feature set, won’t exactly match what we offer in WhatsApp chats.
If you care about WhatsApp interoperability, you should read the entire blog post at this link. Then promptly disable the feature once WhatsApp informs you that interop support is ready.
The Meta AI tool is present on Facebook, Instagram, WhatsApp and Messenger to varying degrees, appearing in feeds, chats, searches and other components of the platforms. While Meta advertised the tool as a way to “get things done, learn, create and connect with the things that matter to you,” many users have found the unsolicited presence of AI functionality nothing more than annoying.
The bad news is that you cannot simply opt out of the Meta AI entirely. There is no kill switch to turn it all off, but people are still dedicated to finding ways around the unwanted intrusions. One of the most bothersome features, Meta AI chat, for example, can be curtailed following a few simple steps.
Hoping to turn your Meta AI chat off? Here’s how in a few quick steps:
How to turn off Meta AI chat on Facebook
◾ Open Facebook and look for the search bar at the top of the page. Instead of a magnifying glass, it now appears as a blue-gradient circle.
◾ Once in the search bar, click the blue arrow that appears to the right.
◾ This will take you to the Meta AI chat. Look for the “i” icon in the upper right corner and click it.
◾ Click the “mute” option that appears on the next page.
◾ Select how long you want to mute the chat. If you’re looking to do so indefinitely, choose the “until I change it” option.
How to turn off Meta AI chat on Instagram
The process on Instagram is much the same as Facebook, just on a slightly different interface.
◾ Locate the search bar at the top of the page and click it. Again, what was formerly a magnifying glass may now appear as a circle. Click the arrow that appears to the right of the search bar.
◾ This will again bring you to Meta AI chat. Click the “i” icon located in the top right-hand corner of the page.
◾ Select the “mute” option that appears on the following page.
◾ Click the slider at the bottom that says “mute notifications,” then select the duration of time you want the chat muted. Again, choose the “until I change it” option if you want it turned off indefinitely.
While these steps won’t scrub the presence of Meta AI completely from your Facebook and Instagram experiences, they will mute and prevent notifications from Meta AI chat, one of the features netizens have found most bothersome.
You can continue to use the “search” functions on both platforms like normal but may see AI-suggested searches interspersed in regular search results.
AI functions may appear when scrolling through your feed as well, appearing as full-sized advertisements and cards or under regular posts with offers of “ask Meta AI” and “tell me more about…”. Unfortunately, this cannot be turned off at this time, but you can avoid clicking them to continue your browsing as usual.
In WhatsApp and Messenger, simply delete the Meta AI chat thread that appears in the apps. This will delete the conversation and remove Meta AI from your contact list, making the AI functions relatively easy to ignore.