“AI will replace those people who are not very specialised, someone doing unproductive tasks which can be easily automated.” Writesonic founder and CEO Samanyou Garg is convinced artificial intelligence will impact entry-level jobs, but believes those with a unique style or expertise, for instance in writing, won’t be replaced, and that humans will still be needed to proofread the content AI generates.

Garg, 25, whose startup is behind the AI-based writing tool Writesonic and Chatsonic, a chatbot that answers questions, argues that AI will instead act as “a layer of augmentation” that helps people increase productivity, win more clients and produce more content.

“I don’t think people will be losing jobs. But the menial work, the things that can be easily done, both in terms of cost efficiency and productivity, if a bot or AI does that, then you have a lot more time building your skill sets and focusing on more important things,” Garg told indianexpress.com in an interview.

Launched in 2020, Writesonic is an AI-powered writing tool that can be used to create any text-based content, including full-length blog posts, press releases and ad copy. Garg claims Writesonic has more than 1 million users, and its clients include PR agencies, marketing agencies and even publications, which he did not name. Garg says most people use Writesonic to generate SEO-optimised content for their blog posts, and lately the tool has increasingly been used to rewrite or paraphrase content. He explained: “60 per cent of Writesonic users are freelancers and writers.”

As the reach of AI writing tools like Writesonic grows, many fear that AI could kill journalism as we know it. “If you already have something and you just need to rewrite it, AI can turn a boring copy into an exciting piece, but AI is not at a stage where you automate it and rely completely on a bot,” he said, adding that there are times when AI changes facts, which reduces the accuracy of a piece. “A human should always be in the loop to fact check, edit it, add links or any facts and improve any non-factual information.”

For years, analysts and experts have predicted that artificial intelligence would come ever closer to replicating human behaviour. Last year, OpenAI, a San Francisco-based artificial intelligence lab, released ChatGPT, a chatbot that gives human-like responses to any question. The AI chatbot became a global sensation, with millions using it to ask questions, write essays and create poetry. The maturity with which ChatGPT responds to queries, as if you are chatting with another person, could upend traditional search engines like Google, photo editors like Adobe Photoshop and voice assistants like Siri and Alexa. OpenAI’s DALL-E, which can generate images from simple text prompts, is also getting a lot of attention.

“Even though we integrate [Chatsonic] with Google, we get the top results from Google and can quickly produce a very comprehensive and informative answer to any question. But still, it’s not at the stage where it can replace Google yet,” he explains. “The main limitation right now is that these AI models are kind of expensive, both to train, to run, and to give out results. With the current limitation of technology, it's not possible to scale it at that level, the way you do Google searches,” he added. Garg, however, believes AI chatbots will eventually replace voice-based assistants like Siri, Alexa and Google Assistant.
“If not now, in the next few months, they [AI chatbots] will definitely replace these assistants unless they improve,” he says.

Chatsonic, the text-based artificial intelligence tool that went live last month, has 80,000 users. Users can ask questions ranging from simple factual queries like “Who is the CEO of Tesla?” to absurd ones like “How many cows are there in India?” and get clear responses. Both Writesonic and Chatsonic use a mixture of OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) models as well as proprietary models.

Garg warns that although the responses look accurate, AI chatbots can sometimes offer incorrect answers. “No large language model is immune to mistakes,” he said.

Amid the popularity of AI-powered tools, some school teachers and university lecturers have expressed concerns that these chatbots could be used to plagiarise exam coursework and write essays. “If it is about some in-depth topics, it is still not possible for these bots to write to the same level of depth as humans would do,” Garg said. “Most bots don’t know about a specific model or a specific piece of research and that is where the AI won't be able to help you, but it can help improve what you've already written and make it sound even better,” he continued.

As these bots are now used by millions, they can also be used to spread misinformation. Garg says his company has content moderation systems in place that reject certain kinds of topics, such as political content, hate speech or sexual content, outright (a rough sketch of that kind of check is shown below). “We first verify your prompt and if it contains hate speech or political content, we reject it,” Garg explains.

The success of OpenAI’s ChatGPT has generated a lot of interest in generative AI-based startups, including Garg’s. “Venture capitalists are shifting from cryptocurrency and web3, and moving to generative AI,” he says.
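To make the prompt-screening flow Garg describes a little more concrete, here is a minimal, hypothetical sketch in Python of the general pattern: check a user's prompt against a moderation filter first, and only send it to a GPT-style model if it passes. This is an illustration under assumptions, not Writesonic's actual implementation; the model name, function name and use of OpenAI's public moderation endpoint are all placeholders chosen for the example.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def answer_if_allowed(prompt: str) -> str:
    """Hypothetical sketch: screen a prompt before generating a reply."""
    # OpenAI's moderation endpoint flags categories such as hate speech,
    # harassment, sexual content and violence. (It does not cover
    # "political content"; a real system would need its own classifier
    # or rules for that.)
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "Prompt rejected by content moderation."

    # Only prompts that pass the screen reach the language model.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; the article only says "GPT-3 models"
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content


print(answer_if_allowed("Who is the CEO of Tesla?"))
```

The point of the pattern is simply that rejection happens before any text is generated, which keeps disallowed prompts from ever reaching the model.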