- “AutoSQL and Cloud Pak for Data: IBM is touting a breakthrough in cloud-based database management. In short, for businesses that serve up answers to customer queries using cloud-managed AI databases, this should significantly speed things up.
- According to IBM, the new system gives answers to distributed queries “as much as 8x faster than previously and at nearly half the cost of other compared data warehouses.”
- Watson Orchestrate: The “no code” AI paradigm is picking up steam, and this is a great example of how it can be useful. Orchestrate is an AI system designed to augment workflows for individuals.”
“There are various ways in which conversational AI fits into hyper-personalised banking for a hassle-free customer experience.
Four benefits a hyper-personalised banking system delivers are:
- Higher page conversion rates
- Increased customer trust and loyalty
- Reduced sales time
- Services that ease the decision-making process”
- “We believe that more targeted, sustained investments in AI for social impact (sometimes called “AI for good”)—rather than multiple, short-term grants across a variety of areas—are important for two reasons.
- First, AI often has large upfront costs and low ongoing or marginal costs. AI systems can be hard to design and operationalize, and they require an array of potentially costly resources.
- Second, targeted, sustained funding matters because any single point of failure—lack of training data, misunderstanding users’ needs, biased results, technology poorly designed for unreliable Internet—can hobble a promising AI-for-good product.”
- “In the finance industry, artificial intelligence and machine learning both have several applications. Chatbots and robotic process automation are two examples of AI applied to finance.
- According to one report, by using AI technology the financial sector can save up to $447 billion by 2023.”
- “In a new paper titled ‘Why AI is Harder Than We Think,’ Melanie Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts.
- 1) Narrow AI and general AI are not on the same scale. Designing systems that can solve single problems does not necessarily get us closer to solving more complicated problems. Mitchell describes this first fallacy as the belief that “narrow intelligence is on a continuum with general intelligence.”
- 2) The easy things are hard to automate. ‘The things that we humans do without much thought—looking out in the world and making sense of what we see, carrying on a conversation, walking down a crowded sidewalk without bumping into anyone—turn out to be the hardest challenges for machines,’
- 3) Anthropomorphizing AI doesn’t help. We use terms such as “learn,” “understand,” “read,” and “think” to describe how AI algorithms work. While such anthropomorphic terms often serve as shorthand to help convey complex software mechanisms, they can mislead us to think that current AI systems work like the human mind.
- 4) Intelligence may not exist without a body. ‘Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It’s not at all clear that these attributes can be separated.’”