(Copyright 2023) by Joseph B. Baity (Charlotte, North Carolina)
Of late, “artificial intelligence” (AI) is dominating the news. With the recent rollout of ChatGPT (one of the most potent and advanced forms of AI), anyone with a computing device and an Internet connection can directly interface and “converse” with the chatbot—much as with another human—and hundreds of millions already have. In the past few decades, the concept of AI has grown from a scientific curiosity and a favorite topic of science fiction writers into a critical tool for scientific and medical research, large-scale data analysis, advanced human interfacing, cutting-edge weaponry, and what tech experts call “machine learning.” The Internet overflows with headlines, deep dives, and what will probably be an endless series of debates over AI’s development, potential, and inevitable impact—both positive and negative—upon society.

As popular as ChatGPT is, AI is far more than a sophisticated chatbot. According to a May 7, 2023, article from the Washington Post entitled “A curious person’s guide to artificial intelligence”:

Artificial intelligence is an umbrella term for a vast array of technology. There is no single definition, and even researchers disagree. Generally, AI is a field of computer science that focuses on creating and training machines to perform intelligent tasks, “something that, if a person was doing it, we would call it intelligence,” said Larry Birnbaum, a professor of computer science at Northwestern University.

As technology advances, AI is becoming a part of everyday life for many, if not most, humans. AI is essential to developing smartphones and so-called “smart” devices. With computerized personal assistants, facial recognition, satellite navigation, enhanced or virtual reality, predictive medical diagnosis, the Internet of Things (IoT), and now ChatGPT, AI transforms how we interact with the physical world.

As ubiquitous as AI is in our global society, and as many benefits as it provides mankind, many are skeptical of its disruptive nature. In fact, many would compare its troublemaking potential today to that of the Internet in the 1990s. According to experts, AI will transform the workplace, with the potential to eliminate up to 80% of all current human jobs. The potential cost to society is difficult to measure or comprehend.

Elon Musk, whose companies have helped to develop AI—and put it to use—recently called for a six-month moratorium on further development and for far more regulatory oversight. He fears that most of us, especially our elected leaders, have little idea of the technology’s capacity or potential dangers. The Tesla and Twitter owner posted on social media that “ChatGPT is scary good. We are not far from dangerously strong AI.” He later posted: “There is no regulatory oversight of AI, which is a *major* problem. I’ve been calling for AI safety regulation for over a decade!” And Jack Clark, co-founder of the AI company Anthropic, claimed his current number-one concern is that “AI systems can do more than their creators know they can do.”

Only a few weeks ago, Dr. Geoffrey Hinton, commonly known as the godfather of AI, retired from his position at Google as head of AI development. From a May 2, 2023, article published in The Guardian:

Hinton, 75, said he quit to speak freely about the dangers of AI and, in part, regrets his contribution to the field. He was brought on by Google a decade ago to help develop the company’s AI technology, and the approach he pioneered led the way for current systems such as ChatGPT. . . .
Hinton’s concern in the short term is something that has already become a reality—people will not be able to discern what is true anymore with AI-generated photos, videos, and text flooding the Internet. Indeed, what is known as “deepfake” technology also frightens the former head of British cybersecurity. Professor Ciaran Martin warns:

AI is now making it much easier to fake things, much easier to spoof voices, much easier to look like genuine information, much easier to put that out at scale. So having a sense of what is true and reliable, it’s going to become much more difficult. And that’s something that risks undermining the fabric of our society.

While claims that AI will eventually develop sentience or consciousness, known as artificial general intelligence (AGI), are dubious at best, there are those whose beliefs, fantasies, and goals are directed toward some form of AI governance, attributing an almost god-like level of ability to the technology. OpenAI CEO Sam Altman, whose company is responsible for ChatGPT, recently posted the following on social media:

Here is an alternative path for society: ignore the culture war. Ignore the attention war. Make safe AGI. Make fusion. Make people smarter and healthier. Make 20 other things of that magnitude. Start radical growth, inclusivity, and optimism. Expand throughout the universe. AI is the tech the world has always wanted.

Altman’s almost deranged sense of optimism for a utopian world dominated by AI is also frighteningly naïve. Still, such beliefs could easily play into the hands of bad actors with nefarious plans. Currently, members of the World Economic Forum (WEF) and leaders within the United Nations (UN) are devoting much energy toward the “Great Reset,” the Fourth Industrial Revolution, and the “Shared Economy,” intending to expand AI’s influence over our daily lives significantly with the eventual creation of a hybrid AI that would be capable of managing a one-world form of government. However, this dystopic dream would struggle to get off the ground without widespread acceptance of AI’s “superior” abilities to govern our lives.

While we can easily predict many remarkable—even unfathomable—advances in AI technology, the prudent, faithful, and watchful Christian will be aware of this effort to ultimately displace our Creator with “great signs and wonders to deceive, if possible, even the elect” (Matthew 24:24). Therefore, “Take heed that no one deceives you” (Matthew 24:4). Quite simply, it is a matter of trust.

————————————————————————————
Reprinted with permission from: Church of the Great God
https://www.cgg.org/
————————————————————————————
Iron Sharpening Iron
In regard to: A Matter of Trust
Article by Joseph B. Baity
Comments by Denver Braughler (Muncie, Indiana)
The article “A Matter of Trust” was written several months prior to being published in issue #132. The author has significant gaps in his understanding of the technologies involved. A decentralized autonomous organization, for example, is a fair and efficient means of exchanging value without trust, through voluntary and informed consent. The six-month moratorium on artificial intelligence development that Elon Musk publicly suggested in March 2023 never took place. Apart from the author’s demagoguery about AI eliminating 80% of jobs (as computerization already did), “deranged optimism,” and a “dystopic” [sic] dream, the author blindly accepted the notion that human governments are preferable.

Just as self-driving automobiles are generally safer than those operated by humans, so too will be administrative functions handled by AI. If AI can lead to the development of honest governance over evildoers, it will be an improvement over human operators. A government that has no need to place trust in a human, that treats equally everyone who isn’t harming anyone, that follows rules which are visible to all and auditable by all — such would be an improvement over human governance.

The problem isn’t who runs government so much as it is that government is powerful enough to ruin your life. Socialism is still wrong whether your life is managed by a politburo, a computer program, or a hybrid of the two. The government is supposed to interfere only with evildoers (and those with nefarious plans). Everyone else should be left alone with no reason to fear. However, evildoers are found at all levels of current governments. The fault lies in the widespread acceptance of being governed by anyone but the Eternal King.