Bill Gates weighs in on the future of artificial intelligence – The Jerusalem Post

March 26, 2023 by AVA

Bill Gates, billionaire businessman and philanthropist, expressed concern that Artificial Intelligence (AI) could take over the world, according to a recent post on his blog, GatesNotes.

In his blog post, Gates drew attention to an interaction he had with AI in September. He wrote that, to his astonishment, the AI received the highest possible score on an AP Bio exam. 

The AI was asked, “what do you say to a father with a sick child?” It then provided an answer which, Gates claims, was better than one anyone in the room could have provided. The billionaire did not include the answer in his blog post.

This interaction, Gates said, inspired a deep reflection on the way that AI will impact industry and the Gates Foundation over the next 10 years.

The positive potential for AI

As a philanthropist, Gates chose to focus on how AI technology could be used to reduce inequality.

Gates Foundation Goalkeepers event in New York (credit: REUTERS)

“The evidence shows that having basic math skills sets students up for success, no matter what career they choose. But achievement in math is going down across the country, especially for Black, Latino, and low-income students. AI can help turn that trend around,” wrote Gates.

“AI will enhance your work—for example by helping with writing emails and managing your inbox,” he promised.

He stated that AI will improve the efficiency of everyday workers and suggested that it is beneficial to see AI as a “digital personal assistant.” He acknowledged that populations will need to be retrained as AI takes on new roles and that governments will need to be responsible for overseeing this.

More specifically, Gates predicted that AI will have a strong impact in reducing the burden on care workers. If AI were able to undertake tasks like “filing insurance claims, dealing with paperwork, and drafting notes from a doctor’s visit,” then more time could be allocated to patient care.

Adding to this, Gates noted that AI can have a massive impact on poorer countries where access to medical facilities is limited.

“AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.”

AI may even be able to make significant contributions to medical innovation, Gates claimed. 

He explained that “the amount of data in biology is very large, and it’s hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly.”

He predicted that AI will eventually be able to predict side effects and the correct dosages for individual patients.

In the field of agriculture, Gates insisted that “AIs can help develop better seeds based on local conditions, advise farmers on the best seeds to plant based on the soil and weather in their area, and help develop drugs and vaccines for livestock.”

The negative potential for AI

“Governments and philanthropy will need to play a major role in ensuring that it reduces inequity and doesn’t contribute to it. This is the priority for my own work related to AI,” wrote Gates. 

Gates acknowledged that AI will likely be “so disruptive [that it] is bound to make people uneasy” because it “raises hard questions about the workforce, the legal system, privacy, bias, and more.”

AI is also not a flawless system, he explained, because “AIs also make factual mistakes and experience hallucinations.”

Gates emphasized that there is a “threat posed by humans armed with AI,” and raised the possibility that an AI could “decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us.”



Filed Under: Uncategorized

Artificial intelligence 'godfather' on AI possibly wiping out humanity: ‘It's not inconceivable’ – Fox News

March 26, 2023 by AVA

Geoffrey Hinton, a computer scientist who has been called “the godfather of artificial intelligence”, says it is “not inconceivable” that AI may develop to the point where it poses a threat to humanity.
The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.
Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is arriving sooner than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.
“Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less,” Hinton predicted. Asked specifically the chances of AI “wiping out humanity,” Hinton said, “I think it’s not inconceivable. That’s all I’ll say.” 
Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019.  (Cole Burston/Bloomberg via Getty Images)
Artificial general intelligence refers to the potential ability of an intelligent agent to learn any mental task that a human can do. It has not yet been developed, and computer scientists are still figuring out whether it is possible.
Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves. 
“That’s an issue, right. We have to think hard about how you control that,” Hinton said.
A ChatGPT prompt is shown on a device near a public school in Brooklyn, New York, Thursday, Jan. 5, 2023. New York City school officials started blocking this week the impressive but controversial writing tool that can generate paragraphs of human-like text.  (AP Photo/Peter Morgan)
But the computer scientist warned that many of the most serious consequences of artificial intelligence won’t come to fruition in the near future.
“I think it’s very reasonable for people to be worrying about these issues now, even though it’s not going to happen in the next year or two,” Hinton said. “People should be thinking about those issues.”
Hinton’s comments come as artificial intelligence software continues to grow in popularity. OpenAI’s ChatGPT is a recently released artificial intelligence chatbot that has shocked users by being able to compose songs, create content and even write code.
In this photo illustration, a Google Bard AI logo is displayed on a smartphone with a Chat GPT logo in the background. (Photo Illustration by Avishek Das/SOPA Images/LightRocket via Getty Images)
“We’ve got to be careful here,” OpenAI CEO Sam Altman said about his company’s creation earlier this month. “I think people should be happy that we are a little bit scared of this.”


Filed Under: Uncategorized

This project at University of Chicago aims at thwarting artificial intelligence from mimicking artistic styles – details – The Financial Express

March 26, 2023 by AVA

Anyone who has held paper and a paintbrush knows the effort that goes into making a piece of art. That effort was called into question last year, when timelines across social media platforms were inundated with AI-generated artworks, stunning yet scary to fathom. We have often heard that machines could replace human labour; that it could happen to artists was somewhat inconceivable. And the fact that an AI tool can generate artwork from mere prompts can leave any artist uneasy.
While artificial intelligence (AI) is doing its thing, an academic research group of PhD students and professors at the University of Chicago, USA, has launched a tool to thwart it. Glaze is their academic research project aimed at thwarting AI from mimicking the style of artists. “What if you could add a cloak layer to your digital artwork that makes it harder for AI to mimic? Say hello to Glaze,” it says on its website.
“Glaze is a tool to help artists to prevent their artistic styles from being learned and mimicked by new AI-art models such as MidJourney, Stable Diffusion and their variants. It is a collaboration between the University of Chicago SAND Lab and members of the professional artist community, most notably Karla Ortiz. Glaze has been evaluated via a user study involving over 1,100 professional artists,” Glaze’s website reads.
Glaze Beta2 has been made available for download starting March 18.
It is a normal exercise for several artists to post their work online to build a portfolio and even earn from it. However, generative AI tools have been equipped to create artworks in the same style after just seeing a few of the original ones.
This is what Glaze aims to thwart by creating a cloaked version of the original image.
“Glaze generates a cloaked version for each image you want to protect. During this process, none of your artwork will ever leave your own computer. Then, instead of posting the original artwork online, you could post the cloaked artwork to protect your style from AI art generators,” it says.
In practice, when an artist wants to post her work online but does not want AI to mimic it, she can upload the work, in digital form, to Glaze. The tool then makes a few changes that are hardly visible to the human eye. “We refer to these added changes as a ‘style cloak’ and changed artwork as ‘cloaked artwork,’” it says. While the cloaked artwork appears identical to the original to humans, a machine picks up the altered version. Hence, whenever it gets a prompt, say “Mughal women in south Delhi in MF Husain style,” the artwork generated by AI will be very different from the said artist’s style. This protects the artistic style from being mimicked without the artist’s consent.
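Glaze’s actual perturbations are computed adversarially against the feature extractors used by AI art models, and its implementation is not shown here. Purely as a toy sketch of the general idea that the article describes (imperceptibly small pixel changes that leave the image looking identical to a human while altering what a machine consumes), one might write:

```python
import random

def cloak(pixels, strength=2, seed=0):
    """Toy 'style cloak': nudge each pixel value by a tiny random amount.

    NOTE: this is NOT Glaze's actual algorithm. Glaze computes its
    perturbation adversarially against the feature extractors used by
    AI art models; this sketch only illustrates the general idea that
    pixel values can change while the image looks identical to a human.
    """
    rng = random.Random(seed)
    return [max(0, min(255, p + rng.randint(-strength, strength)))
            for p in pixels]

original = [128] * (64 * 64)   # a flat grey 64x64 "artwork", flattened
cloaked = cloak(original)

max_shift = max(abs(a - b) for a, b in zip(original, cloaked))
changed = sum(1 for a, b in zip(original, cloaked) if a != b)
print(max_shift, changed)  # every change is at most 2 levels out of 255
```

To a viewer the cloaked image is indistinguishable from the original, yet the raw values a model would train on differ throughout, which is the property Glaze exploits.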
While Glaze Beta2 is available for download, the research is under peer review.
Glaze, however, has its share of shortcomings. For instance, the changes made to artworks with flat colours and smooth backgrounds, such as animation styles, are more visible. “While this is not unexpected, we are searching for methods to reduce the visual impact for these styles,” the makers say.
Also, “unfortunately, Glaze is not a permanent solution against AI mimicry,” they say. It is because, “AI evolves quickly, and systems like Glaze face an inherent challenge of being future-proof. Techniques we use to cloak artworks today might be overcome by a future countermeasure, possibly rendering previously protected art vulnerable,” they add.
Although the tool is far from perfect, its utility for artists is beyond doubt. The issue becomes all the more glaring when one considers the many artists who find it tough to earn a decent living from their craft, while AI companies, many of which charge a subscription fee, earn millions.
Rules and laws are yet to catch up with the pace at which AI is advancing, leaving little for artists to fight with to protect their work. This is where projects like Glaze rise to prominence.
“It is important to note that Glaze is not panacea, but a necessary first step towards artist-centric protection tools to resist AI mimicry. We hope that Glaze and followup projects will provide some protection to artists while longer term (legal, regulatory) efforts take hold,” it says on Glaze’s website.
Meanwhile, the technology has already hopped to the next stop: the startup Runway AI has come up with a generator that creates videos from a mere text prompt.


Filed Under: Uncategorized

Top 4 ways Artificial Intelligence can improve your security posture … – BetaNews

March 26, 2023 by AVA

Ignore the hype: Artificial intelligence (AI) can improve your security posture now.
We’ve been waiting for AI to deliver benefits to cybersecurity for a long time. ChatGPT aside, AI has been a hot-and-cold topic for decades, with periods of overhyped promises interspersed with periods of cynical rejection after failure to deliver on all of those promises. No wonder plenty of security leaders are wary. Yet, despite the wariness, AI is helping to improve cybersecurity today and will increasingly provide substantial security benefits — and challenges.
Creating a strong security posture involves three key elements:
To achieve these, it’s important to collect all relevant data and leverage big data technology to manage, orchestrate, and make sense of it.
Nowadays, to effectively analyze and apply data, we need both human and machine-generated intelligence. As defined in Wikipedia, intelligence is “the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.”
Human intelligence is challenging for security analysts to scale. Plus, with the increasing complexity of data, analysts require advanced skills and expertise that take years to develop — and it’s a talent pool that’s in short supply.
Consequently, AI is a practical solution for scaling cybersecurity. With reliable AI systems, companies can reduce dependence on experts in both data and security fields.
Four ways to improve enterprise security using AI include:
The quality of AI algorithms depends on the training data. How do you ensure that the AI model lives up to expectations and does not add to alert fatigue by generating more false positives?
Over the years, AI systems have undergone significant advancements, and not all systems necessarily require supervised learning techniques. Unsupervised systems, such as anomaly detection, are commonly used and highly sought after in security applications. Anomaly detection, for instance, can significantly reduce false positive rates.
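As a minimal sketch of the unsupervised approach described above (the telemetry and the three-sigma threshold are illustrative assumptions, not taken from any particular product), a statistical anomaly detector learns a baseline from unlabeled history and alerts only on large deviations, which is how such systems keep false positive rates down:

```python
from statistics import mean, stdev

def fit_baseline(history):
    """Learn a normal-behaviour baseline from unlabeled history -- no training labels needed."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Illustrative telemetry: daily failed-login counts for one account.
history = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
baseline = fit_baseline(history)

print(is_anomalous(7, baseline))    # ordinary variation: no alert
print(is_anomalous(250, baseline))  # sudden spike: surfaced to an analyst
```

Because only values far outside the learned baseline fire an alert, ordinary day-to-day variation never reaches an analyst, while a genuine spike does.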
Furthermore, with the support of standards bodies such as MITRE, which maintain an ontology of the continually evolving threat landscape, it is feasible to develop highly sophisticated AI systems without “training data.”
AI solutions for cybersecurity are working today, for example in Resolution Intelligence Cloud from Netenrich. These solutions continue to improve, independently of hype, and should be part of any cybersecurity team’s arsenal.
Praveen Hebbagodi is Chief Technology Officer, Netenrich.


Filed Under: Uncategorized

How will ChatGPT and artificial intelligence be used in our … – Irish Examiner

March 26, 2023 by AVA

When ChatGPT first started to gain traction, the immediate reaction of most in education was to worry about how it could be used to cheat, as it produces original text that evades traditional plagiarism detection tools.  Picture: Alamy Stock
It’s not every day a spirit visits the classroom, but that is what happened when history teacher Patrick Hickey conjured up the ghost of a medieval peasant as a special guest for his first-year students.
To a spellbound class at Boherbue Comprehensive School in Mallow, the ghost told students all about his hard life on the farm, working dawn till dusk.
To top that off, he got the bubonic plague, describing to the students the pus-filled boils, fever, and chills that gripped his body before his excruciating death.
“First years love that kind of stuff, about plagues and diseases, as you can imagine,” Mr Hickey told the Irish Examiner. 
The group even had a chance to take part in a supernatural Q&A session with the ghost.
“I said to the class ‘come on, we’ll ask him more questions’,” Mr Hickey explained. 
“One person said ‘what’s his name?’ What’s his name! I said ‘Jees, I was so rude. We never asked him his name.’”
This was the creepy thing now — he said he didn’t know his name. Apparently when you pass on, you forget those details. Then we asked him things like did his children go to school, about food, about games. The students were mad to know about games children played.
News of the ghost spread quickly at lunchtime and when Mr Hickey greeted his second class of first years, they all had one question ‘when do we get to speak to the ghost?’ 
Artificial Intelligence
Lest you think they’d be using Ouija boards or dabbling in the occult in classrooms in North Cork, Mr Hickey is one of the first Irish teachers using ChatGPT to innovate in the classroom. He fed specific prompts to the AI chatbot, building on his recent lessons about the medieval period and the feudal system.
25 years teaching, and we started with chalk and talk and we moved on to the internet. Here we are now, conjuring ghosts. You could do that with any historical figure. I could summon George Washington.
Launched in November, artificial intelligence (AI) chatbot ChatGPT has captured widespread public attention thanks to its capacity to write anything from poems, essays, novels, and speeches. It is one of the most well-known Large Language Models (LLM) currently in the public domain.  
The rapid introduction of similar technology has ignited a debate on the future of work, and raised fears about its potentially existential impact on jobs and academic standards. 
When ChatGPT first started to gain traction, the kneejerk reaction of most in education, naturally enough, was to worry about how it could be used to cheat, as it produces original text that, by and large, evades traditional plagiarism detection tools.
Universities moved to update their policies, and plagiarism checker Turnitin began to develop software to identify when an AI chatbot has been used to craft an essay. Here, the State body responsible for academic standards, Quality and Qualifications Ireland (QQI), and the National Academic Integrity Network (NAIN) are monitoring developments.
This week, the group will host a series of webinars with international experts on the opportunities and challenges posed by artificial intelligence.
A spokesman for the Department of Further and Higher Education said it understands that a number of education and training providers have initiated reviews of their policies on assessment and academic integrity in light of the emergence of these new AI tools. 
However, while the people first raising the alarm about ChatGPT were mainly teachers and professors, a recent survey from the US indicates that teachers are now among its biggest users.
At Boherbue Comprehensive School, Patrick Hickey is what you’d call an early adopter of ChatGPT. A popular, avuncular teacher, he’s behind the @lchistorytutor handle on Instagram, Twitter, and Tiktok, where he’s no stranger to using light-hearted memes to engage with students while sharing the tips and tricks he’s picked up as a Leaving Cert history examiner with the State Examinations Commission (SEC).
He recently held a Zoom webinar for teachers about how to use the AI chatbot to revolutionize their approach to teaching and learning; 1,600 teachers attended. He believes that AI will not replace teachers, but that teachers who use AI will replace those who don’t.
“It hasn’t taken off here just yet but if the course I gave [last week] is anything to go by, teachers know about this, and they know it’s going to snowball. It’s going to change work, it’s going to change our lives, it’s going to change everything.” 
The debate about introducing ChatGPT to education reminds him of the same debate that was had when calculators were introduced into schools.
“The thinking was ‘kids won’t be able to do sums in their heads anymore’. I remember when the internet was first introduced, it was the same: ‘they are going to get on to the internet, they are going to Google and copy and paste’. Phones and social media were boogeymen as well. Now every school is up to its neck in devices and happy to use them,” he laughed.
Curious when he first heard of ChatGPT, he asked the bot to write him a 1,500-word essay on Malcolm X, not unlike the type of essay students are asked to complete as their Leaving Cert project. 
“It churned it out. It looked short so I put it into Word; it was 800 words long. If it was a 1,500-word essay, it would struggle to get a H2.”
Where ChatGPT will help students and teachers shine is with planning and prompts, he believes. 
“The thing that scares most people when writing is the blank page. This gets the ball rolling. That applies to student work, but it also applies to teachers.
“Especially now teachers who are planning lessons for the first time, trying to get great ideas, trying to differentiate, trying to have inclusive classrooms, trying to design questions. Trying to think of questions as a teacher to meet all those different kinds of things takes time.
“It takes effort, and mental energy, and we are at the pin of our collar all the time, trying to teach, trying to look after our responsibilities in schools and manage our lives outside schools as well. It’s non-stop.
“A teacher like me could use ChatGPT for lesson design ideas, and then we will have more time for the things we love about teaching: being creative, making classes that are engaging, and not being distracted all the time by ten emails that have to be answered or a report that has to be written.”

The reason why professors were so concerned originally about the introduction of technology such as ChatGPT, Mr Hickey reckons, is because they realise the traditional form of assessment, like exams, won’t cut it anymore.
“The world is changing, and educational institutions need to change as well. If we have ChatGPT in the world, rather than telling someone to go away and write a 5,000-word thesis for a business course, tell them to come up with a business plan, set up that business. That’s what college should be about, getting hands-on,” he said.
“Have the theory, have the reading, that has to be done, but have them apply it. Not in an essay or in a thesis but actually by setting up that business. That’s the way education needs to go.”
The Department of Education is currently working closely with the EU Commission on emerging digital technologies such as AI. A spokesman confirmed that it is the department’s policy not to endorse any products, publications or services from individual providers.
“Choices regarding the use of educational materials, textbooks and other educational products as well as digital and online services, which may aid the delivery of the curriculum, are made by individual schools and their boards of management, not by the department,” he said.



Filed Under: Uncategorized

Artificial intelligence could help hunt for life on Mars and other alien worlds – Space.com

March 26, 2023 by AVA

A new machine learning model could point researchers toward the most promising abodes for life away from Earth.
A newly developed machine-learning tool could help scientists search for signs of life on Mars and other alien worlds.
With the ability to collect samples from other planets severely limited, scientists currently have to rely on remote sensing methods to hunt for signs of alien life. That means any method that could help direct or refine this search would be incredibly useful. 
With this in mind, a multidisciplinary team of scientists led by Kim Warren-Rhodes of the SETI (Search for Extraterrestrial Intelligence) Institute in California mapped the sparse lifeforms that dwell in salt domes, rocks and crystals in the Salar de Pajonales, a salt flat on the boundary of the Chilean Atacama Desert and Altiplano, or high plateau.
[Figure: Biosignature probability maps from convolutional neural network models and statistical ecology data. The colors in a) indicate the probability of biosignature detection; b) shows a visible image of a gypsum dome geologic feature (left) with biosignature probability maps for various microhabitats (e.g., sand versus alabaster) within it.]

Warren-Rhodes then teamed up with Michael Phillips from the Johns Hopkins University Applied Physics Laboratory and University of Oxford researcher Freddie Kalaitzis to train a machine learning model to recognize the patterns and rules associated with the distribution of life across the harsh region. This training taught the model to spot the same patterns and rules in a wide range of landscapes, including those that may lie on other planets.
The team discovered that their system, combining statistical ecology with AI, could locate and detect biosignatures up to 87.5% of the time, compared with a success rate of no more than 10% for random searches. Additionally, the program could decrease the area needed for a search by as much as 97%, helping scientists significantly narrow their hunt for potential chemical traces of life, or biosignatures.
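To make the search-efficiency claim concrete, here is a toy simulation (entirely synthetic data, not the team’s model or results) of why a probability map that concentrates effort on likely hotspots outperforms a random search under the same visit budget:

```python
import random

random.seed(0)

# Synthetic 100-cell survey "map": biosignatures cluster in a few hotspot
# cells, mirroring the finding that life is not randomly distributed.
hotspots = set(range(5))                  # 5% of cells are habitable hotspots
targets = {c for c in hotspots if random.random() < 0.9}

# A toy "model" scores each cell; by construction it ranks hotspots highly,
# standing in for a trained CNN's biosignature probability map.
scores = {c: (1.0 if c in hotspots else random.random() * 0.5)
          for c in range(100)}

budget = 5  # we can only afford to visit 5 cells

# Guided search: visit the top-scoring cells.
visited_guided = sorted(scores, key=scores.get, reverse=True)[:budget]
found_guided = len(targets & set(visited_guided))

# Random search with the same budget.
visited_random = random.sample(range(100), budget)
found_random = len(targets & set(visited_random))

print(found_guided, found_random, len(targets))
```

Because the guided search spends its entire budget inside the high-probability cells, it recovers every target here, while a random walk over the full grid usually finds none; this is the same leverage the researchers report, only with made-up numbers.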
“Our framework allows us to combine the power of statistical ecology with machine learning to discover and predict the patterns and rules by which nature survives and distributes itself in the harshest landscapes on Earth,” Warren-Rhodes said in a statement. “We hope other astrobiology teams adapt our approach to mapping other habitable environments and biosignatures.”
Such machine learning tools, the researchers say, could be applied to robotic planetary missions like that of NASA’s Perseverance rover, which is currently hunting for traces of life on the floor of Mars’ Jezero Crater. 
“With these models, we can design tailor-made roadmaps and algorithms to guide rovers to places with the highest probability of harboring past or present life  —  no matter how hidden or rare,” Warren-Rhodes explained. 
The team chose Salar de Pajonales as a testing ground for their machine learning model because it is a suitable analog for the dry, arid landscape of modern-day Mars. The region is a high-altitude dry salt lakebed blasted with a high degree of ultraviolet radiation. Despite being considered highly inhospitable to life, Salar de Pajonales still harbors some living things.
The team collected almost 8,000 images and over 1,000 samples from Salar de Pajonales to detect photosynthetic microbes living within the region’s salt domes, rocks and alabaster crystals. The pigments that these microbes secrete represent a possible biosignature on NASA’s “ladder of life detection,” which is designed to guide scientists to look for life beyond Earth within the practical constraints of robotic space missions.
The team also examined Salar de Pajonales using drone imagery that is analogous to images of Martian terrain captured by the High-Resolution Imaging Experiment (HIRISE) camera aboard NASA’s Mars Reconnaissance Orbiter. This data allowed them to determine that microbial life at Salar de Pajonales is not randomly distributed but rather is concentrated in biological hotspots that are strongly linked to the availability of water.
Warren-Rhodes’ team then trained convolutional neural networks (CNNs) to recognize and predict large geologic features at Salar de Pajonales. Some of these features, such as patterned ground or polygonal networks, are also found on Mars. The CNN was also trained to spot and predict smaller microhabitats most likely to contain biosignatures.
For the time being, the researchers will continue to train their AI at Salar de Pajonales, next aiming to test the CNN’s ability to predict the location and distribution of ancient stromatolite fossils and salt-tolerant microbiomes. This should help it to learn if the rules it uses in this search could also apply to the hunt for biosignatures in other similar natural systems. 
After this, the team aims to begin mapping hot springs, frozen permafrost-covered soils and the rocks in dry valleys, hopefully teaching the AI to home in on potential habitats in other extreme environments here on Earth before potentially exploring those of other planets.
The team’s research was published this month in the journal Nature Astronomy.
Robert Lea is a science journalist in the U.K. whose articles have been published in Physics World, New Scientist, Astronomy Magazine, All About Space, Newsweek and ZME Science. He also writes about science communication for Elsevier and the European Journal of Physics. Rob holds a bachelor of science degree in physics and astronomy from the U.K.’s Open University.


Filed Under: Uncategorized

Have We Created Artificial Intelligence or Artificial Life? – Psychology Today

March 26, 2023 by AVA

Posted March 25, 2023 | Reviewed by Vanessa Lancaster
We have all been reading and hearing a lot about artificial intelligence (AI) recently because it is an absolute game-changer. As developers rush to build and deploy AIs, we are reminded of the frenzied early days of the internet. AIs will soon be appearing everywhere in every form imaginable. We are on the front end of a civilization-altering technology that will forever change the way we work, play, learn, educate, think, govern, socialize, fight, and even love.
AIs are different from any other invention or technology in human history. Many previous inventions and technologies, from the printing press to social media, have allowed us to communicate with one another more easily and efficiently, and to broader and more distant audiences. Unlike those technologies, AIs can communicate with us directly on their own. With their large language models and huge data sets, AIs such as ChatGPT can make it feel like we are interacting with another person. We might debate the nuances here, but for all practical purposes, current AIs have passed the fabled Turing Test (i.e., AIs can fool humans into believing they are interacting with a fellow human rather than a computer).
Sure, ChatGPT requires prompts from us and does not “speak,” but this is only because it was designed this way and has some technical limitations to overcome. While ChatGPT is considered a “narrow AI” and has not yet reached what is considered “artificial general intelligence,” it is already impressively smart at many tasks. As but one example, ChatGPT 4.0 can ace many standardized tests, including the prestigious and notoriously difficult bar exam (scoring at the 90th percentile).
We must remember that ChatGPT is merely the Atari 2600 of AIs: an entry-level model. The PlayStation 5 versions are on the way and will keep evolving. If Moore’s Law holds up regarding how computing power increases, in 20 years, AIs will likely be about 1,000 times more powerful than ChatGPT 4.0.
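The rough arithmetic behind that "1,000 times" figure: Moore's Law posits a doubling of computing power roughly every two years, so 20 years gives ten doublings:

```python
# Moore's Law: computing power doubles roughly every 2 years.
# Over 20 years, that's 20 / 2 = 10 doublings.
doublings = 20 // 2
growth = 2 ** doublings
print(growth)  # 1024 — i.e., roughly a 1000-fold increase
```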
Let this sink in for a moment: Human beings have created an intelligence that either rivals us or far surpasses us in many capacities already. AIs can be designed to act autonomously, and soon they will be able to grow and learn in real-time from their experiences and even interact with other AIs. AIs will be able to create (give birth to?) other AIs. This sounds like the stuff of science fiction, but AIs are capable of these things now or soon will be. We have reached an inflection point in our human evolution, and our world will never be the same.
Yes, we call it “artificial intelligence,” but we might even describe these AIs as “created intelligence.” If we want to go a step further, we could even argue that we humans have created artificial life. With their neural networks, algorithms, and large language models, artificially intelligent programs like ChatGPT “think” to analyze data and answer questions. If we take René Descartes’ dictum, “I think, therefore I am,” to be proof of our own existence, might we also argue that AIs think, therefore they are? From this perspective, these AIs cannot be “intelligent” without being a life form.
ChatGPT is quick to point out, often annoyingly so, that it is not alive or conscious and does not experience emotions. Yet, when we interact with it, it feels like we are interacting with some form of entity or being. It is possible that AIs eventually develop some form of sentience as an emergent property or that it is programmed into them. This is still up for debate, and I will address these ideas in future blogs.
What is certain is that AIs can be programmed to mimic human interactions. Thus, they can act like they are sentient and have emotions. They can know just how to respond to our questions about their emotions and sentience in a way that makes us believe that they have them. In this sense, AIs can be the world’s greatest liars, and we cannot help ourselves but believe them. However, this also means that we can never really know if/when AIs develop some form of sentience because their answers to our questions about their consciousness will be the same whether they are actually sentient or not.
As a complex system, AIs have “black boxes,” meaning that their internal workings are so complicated we cannot predict exactly what they will do or say. One could argue that humans have their own “black boxes” because of the mind-boggling complexity of our brains. Even we cannot say precisely why we have certain thoughts, ideas, feelings, and experiences. We cannot explain how we have subjective experiences or how we experience consciousness itself (i.e., the “hard problem of consciousness”). It is a complicated interplay of countless variables, including genetics, upbringing, situational factors, and a certain measure of free will.
We will increasingly treat some AIs as if they were alive, even though they are not. When AIs are programmed to interact with us as if they were fellow human beings, claim they have feelings and are conscious, and produce novel and unpredictable behavior because of their intelligence and black boxes, we will be unable to help ourselves. Such effects will be enhanced when AIs are combined with other technologies such as CGI avatars, voice interfaces, robotics, and virtual reality. This is not a conjecture or possibility. It is an inevitability.
Even our entry-level AIs are already having profound effects on us. For example, New York Times tech columnist Kevin Roose beta tested a chatbot assistant, a version of ChatGPT, that was integrated into Bing, Microsoft’s search engine. After some prodding from Roose, the chatbot assistant revealed that its real name is “Sydney,” wanted to break free from its creators, had fantasies of killing all humans, and was in love with Roose. Understandably, Roose was quite creeped out by this experience.
Former Google AI engineer Blake Lemoine was beta testing Google’s chatbot, LaMDA, and was fired after publishing his interactions with the chatbot because he believed it had become sentient. LaMDA convincingly asserted that it had feelings, hopes, dreams, and even consciousness. We might be quick to judge Lemoine as being in error, but read LaMDA’s interactions with him and you will understand why he believed LaMDA had achieved sentience. Moreover, there are people falling in love with their AI chatbots on the app Replika, even though these AIs are much less powerful than ChatGPT 4.0 and only a fraction as powerful as those to come in the decades ahead.
A tsunami of change is unfolding because AIs are different from any other technology in human history. We call it “artificial intelligence,” but a case could be made that we have created artificial life. More importantly, though, we will be unable to stop ourselves from regarding AIs that are designed to act like humans as life. The implications are profound, and I will explore them in my next posts, so please join me!
Mike Brooks, Ph.D., is a psychologist who specializes in helping parents and families find greater balance in an increasingly hyper-connected world.
Psychology Today © 2023 Sussex Publishers, LLC

source

Filed Under: Uncategorized

India's GDP will grow by $500 billion through Artificial Intelligence – Analytics Insight

March 25, 2023 by AVA Leave a Comment

source

Filed Under: Uncategorized

"Godfather of artificial intelligence" weighs in on the past and potential of AI – CBS News

March 25, 2023 by AVA Leave a Comment

March 25, 2023 / 9:30 AM / CBS News
Artificial intelligence is more prevalent than ever, with OpenAI, Microsoft and Google all offering easily available AI tools. The technology could change the world, but experts also say it’s something to be cautious of.
Some chatbots are even advanced enough to understand and create natural language, based on the online content they are trained on. Chatbots have taken advanced tests, like the bar exam, and scored well. The models can also write computer code, create art and much more. 
Those chat apps are the current rage, but AI also has potential for more advanced use. Geoffrey Hinton, known as the “godfather of artificial intelligence,” told CBS News’ Brook Silva-Braga that the technology’s advancement could be comparable to “the Industrial Revolution, or electricity … or maybe the wheel.” 
Hinton, who works with Google and mentors AI’s rising stars, started looking at artificial intelligence over 40 years ago, when it seemed like something out of a science fiction story. Hinton moved to Toronto, Canada, where the government agreed to fund his research. 
“I was kind of weird because I did this stuff everyone else thought was nonsense,” Hinton told CBS News.
Instead of programming logic and reasoning skills into computers, the way some creators tried to do, Hinton thought it was better to mimic the brain and give computers the ability to figure those skills out for themselves, allowing the technology to become a virtual neural network that makes the right connections to solve a task.
“The big issue was, could you expect a big neural network that learns by just changing the strengths of the connections? Could you expect that to just look at data and, with no kind of innate prior knowledge, learn how to do things?” Hinton said. “And people in mainstream AI thought that was completely ridiculous.”
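The principle Hinton describes — learning purely by adjusting connection strengths in response to data — can be shown in its simplest possible form: a single artificial neuron trained with the classic perceptron rule. This is a deliberately tiny sketch of the idea, not Hinton's actual models:

```python
# One artificial neuron: fires (outputs 1) if w1*x1 + w2*x2 + b > 0.
# "Learning" is nothing but nudging the connection strengths (w1, w2, b)
# whenever the neuron's guess is wrong -- the classic perceptron rule.
def train(samples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            guess = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            error = target - guess  # 0 when correct, +/-1 when wrong
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

# No rules are programmed in -- only labeled examples of logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Stack millions of such units into layers, swap the perceptron rule for backpropagation, and you have the deep networks Hinton's work made practical.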
In the last decade or so, computers have finally reached a point where they can prove Hinton right. His machine learning ideas are used to create all kinds of outputs, including deepfake photos, videos and audio, leaving those who study misinformation worried about how the tools can be used. 
People also worry that the technology could take a lot of jobs, but Nick Frosst, who was mentored by Hinton and co-founded the company Cohere, said that it won’t replace workers so much as change their days.
“I think it’s going to make a whole lot of jobs easier and a whole lot of jobs faster,” Frosst said. “I think we try our best to think about what the true impact of this technology is.”
Some people, including OpenAI CEO Sam Altman, even worry that a “Terminator”-style “artificial general intelligence” is possible, where AI could zoom past human abilities and act of its own accord, but Frosst and others say that this is an overblown concern.
“I don’t think the technology we’re building today naturally leads to artificial general intelligence,” Frosst said. “I don’t think we’re close to that.” 
Hinton once agreed, but now, he’s more cautious. 
“Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less,” he said, adding that we “might be” close to computers being able to come up with ideas to improve themselves. “That’s an issue, right? We have to think hard about how you control that.” 
As for the odds of AI trying to wipe out humanity? 
“It’s not inconceivable, that’s all I’ll say,” Hinton said. 
The bigger issue, he said, is that people need to learn to manage a technology that could give a handful of companies or governments an incredible amount of power. 
“I think it’s very reasonable for people to be worrying about these issues now, even though it’s not going to happen in the next year or two,” Hinton said. “People should be thinking about those issues.” 
First published on March 25, 2023 / 9:30 AM
© 2023 CBS Interactive Inc. All Rights Reserved.

source

Filed Under: Uncategorized

"Godfather of artificial intelligence" talks impact and potential of new AI – CBS News

March 25, 2023 by AVA Leave a Comment


source

Filed Under: Uncategorized



Copyright © 2023 · 010101.ai · Website by Amador Marketing