Posts Tagged ‘AI’

Using AI for homeschool

Posted: March 16, 2024 by navygrade36bureaucrat in opinion/news

The implosion of public schools during COVID created a whole new batch of homeschooling families. While this is great news, it also means more than a few families are discovering how difficult homeschooling can be, especially when a child struggles in subjects the parents aren’t familiar with.

That’s why I encourage all homeschooling parents to use AI. I use CoPilot since it’s free, but you’re welcome to use OpenAI or any other AI. Now, we aren’t going to try to look up gender studies or DEI subjects, because parents should talk with their children about those topics. But what about math?

Let’s be honest: unless you happen to work in engineering, integrating a function is likely something you haven’t done recently. AI makes this really easy AND it explains the work.
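And you don’t have to take the AI’s word for it. A claimed antiderivative can be sanity-checked numerically with a few lines of plain Python; the integral below is my own textbook stand-in (not an example from CoPilot), and the little Simpson’s-rule helper is something I wrote for illustration, not a library the post uses:

```python
# Numerically check a claimed antiderivative (Python standard library only).
# Claim to verify: the integral of x^2 * e^x is (x^2 - 2x + 2) * e^x + C.
import math

def integrand(x):
    return x**2 * math.exp(x)

def antiderivative(x):
    # The answer an AI (or a calculus student) would hand back.
    return (x**2 - 2*x + 2) * math.exp(x)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

numeric = simpson(integrand, 0.0, 1.0)
exact = antiderivative(1.0) - antiderivative(0.0)  # equals e - 2
print(abs(numeric - exact) < 1e-9)  # True: the claimed answer checks out
```

If the two numbers disagree, the AI’s algebra was wrong, and that’s your cue to ask it to redo the steps.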

Remember diagramming sentences? I don’t because I’m sure I slept through that portion of school. So what do you do when your kid is confused about diagramming sentences?

Problem solved! But what about foreign languages?

Too bad for CoPilot! You have to have a Microsoft account of some kind to make this work.

If it can diagram sentences, it can definitely update your work too!

What about chemistry? Balancing redox equations in high school chemistry is something I haven’t touched in years.
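There’s an easy way to keep the AI honest here, too: a balanced equation must have the same atom counts and the same net charge on both sides, and that check is mechanical enough to script. A small Python sketch, using the classic permanganate/iron(II) reaction as my own stand-in example (not one from the post):

```python
# Verify a balanced redox equation by counting atoms and net charge per side.
# Reaction (permanganate oxidizing iron(II) in acid):
#   MnO4^- + 5 Fe^2+ + 8 H^+  ->  Mn^2+ + 5 Fe^3+ + 4 H2O
from collections import Counter

# Each species: (coefficient, {element: atom count}, charge)
left = [
    (1, {"Mn": 1, "O": 4}, -1),
    (5, {"Fe": 1}, +2),
    (8, {"H": 1}, +1),
]
right = [
    (1, {"Mn": 1}, +2),
    (5, {"Fe": 1}, +3),
    (4, {"H": 2, "O": 1}, 0),
]

def totals(side):
    """Return (total atoms per element, total net charge) for one side."""
    atoms = Counter()
    charge = 0
    for coeff, formula, q in side:
        for elem, n in formula.items():
            atoms[elem] += coeff * n
        charge += coeff * q
    return atoms, charge

# Balanced means both the atom tallies and the net charge match.
assert totals(left) == totals(right)
print("balanced")
```

If the assertion fails, the equation the AI gave you isn’t actually balanced, and it’s time for a follow-up prompt.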

Another great use of AI is technical help. If something doesn’t work correctly on your computer, AI makes it easy to troubleshoot. I had a lot of problems getting rid of the kids’ Google accounts from my wife’s laptop. I would go to a website and their account, vice my wife’s, would load and be heavily restricted. AI helped me solve that problem.

Another great school use is Excel functions. Microsoft Excel is extremely powerful, much more so than Google Sheets, but the syntax and formatting can get messy quickly. CoPilot is especially good at taking what you want to do and spitting out a function you can copy/paste. Even something complicated like pivot tables falls to the power of AI!

I think Microsoft captured AI, especially large language models, with the phrase “CoPilot.” Yes, AI can generate some pretty humorous poems and the occasional rap song, plus create some very cool images, but human beings are still far better at imagining unique things. Where AI shines is rote work. How many times have you Googled different Excel formulas, or how to integrate a function, or where some setting is in PowerPoint? My kids have tons of weird questions that pop up, ranging from English and Math to Biology and Chemistry. Anything that is straightforward will easily be answered with AI.

One caution: I always encourage people to have a discussion with the AI. Just popping in a question and getting an answer is dangerous, because the AI, like human beings, can get it wrong. This happened on an English question my daughter had. The first answer didn’t make sense, and she was ready to write off AI. I had her put in a few more prompts, and then the AI (in this case, CoPilot) gave her the correct answer. Treat it like a really smart human and you’ll do great!

This post represents the views of the author and not those of the Department of Defense, Department of the Navy, or any other government agency. And besides the pictures, nothing else in this post was generated by AI.

So Google AI Knew Col Lawson was right all along!

Posted: February 22, 2024 by datechguy in culture, tech

Col. Tad Lawson: [drunkenly] There are no Nazis in Germany. Didn’t you know that, Judge? The Eskimos invaded Germany and took over. That’s how all those terrible things happened. It wasn’t the fault of the Germans. It was the fault of those damn Eskimos!

Sometimes we let our myths get ahead of us.

Decades of TV and movies depicting computers and AI as something sentient, capable of independent thought or even beyond humans in terms of humanity or right and wrong, have corrupted public perception in the same way that 60 years of Disney movies full of talking animals have made some people forget that animals are dangerous and, given the chance, will kill and eat you.

Thus we see the shock, SHOCK, that Google’s AI seems incapable of showing a white person: the Vikings, the Founding Fathers, and all the popes become people of color.

As far as the AI is concerned, Col Lawson was right. Given the choice between showing the Nazis of 1944 as Caucasians or Eskimos, it will give them that Inuit look every time.

Why? For the same reason that it doesn’t dare show an image of Tiananmen Square: because it’s a computer program doing what it’s been ordered to do by the people who wrote it.

The reality is that Google AI is just a computer program: a complex one, with access to all kinds of data, but still a program that operates under the parameters its programmers have given it. If people understood that this is what it is, all of this stuff would be no big deal.

But language and culture have sold folks on the idea that it’s something it’s not.

Just like “the cloud.” People upload all kinds of things to “the cloud” without thinking that what they’re really doing is uploading files to large server farms run by various companies.

If people thought of their data as being freely handed to these folks, they might think twice, but because it’s called “the cloud,” that thought doesn’t hit them.

Anyone who trusts AI to do anything other than what it’s been told to do by the people who programmed it, and thus by the people who paid those programmers, is a fool.

Don’t be a fool, no matter what Google Gemini tells you, it wasn’t the Eskimos.

I have avoided using Artificial Intelligence since it first appeared on the scene a couple of years ago because it has a pronounced leftist bias built into it. This was done organically by the high-tech companies that invented this technology.

The leftist bias built into AI is about to get much worse because of an Executive Order issued by Joe Biden’s handlers: FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence | The White House

Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

Like all Leftist proposals, this one sounds benign on the surface. It is not until this section that the Marxist nature of the executive order becomes apparent.

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions:

  • Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
  • Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
  • Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

This article provides excellent analysis of this monstrosity: Biden Signs Executive Order Forcing Tech Companies to Program Marxist Ideology into AI – Slay News

buried in the order is a provision that stipulates that AI must help to advance the leftist agenda.  The order mandates that AI must be programmed with a foundation based on “equity.”  Not to be confused with “equality,” the Left’s “equity” agenda is based on Marxism.

Whereas “equality” teaches that everybody should be treated equally, no matter their race or sex, “equity” asserts that people should be treated differently depending on their skin color, “gender identity,” or mental state.  Under the “equity” ideology, for example, a transgender would be given advantages, such as securing a college scholarship or being offered a job, over a normal person.

Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.

One of two prototypes purchased by the Office of the Secretary of Defense’s Strategic Capabilities Office for its Ghost Fleet Overlord program, aimed at fielding an autonomous surface ship capable of launching missiles. (U.S. Defense Department)

Military drones are popping up everywhere. In Afghanistan and Iraq, we became used to seeing Predator drones flying around with Hellfire missiles, flown from bases in the United States and providing a near 24/7 watch for opportunities to blow up terrorists. The latest batch of drones is becoming increasingly autonomous, meaning they can not only think for themselves but also react faster than a human and respond to an ever-changing environment. In the news recently was how an Artificial Intelligence beat a top US Air Force F-16 pilot, and previously the Navy discussed how its Sea Hunter would operate as an autonomous missile barge.

But I’m not here to talk about technology, not only because details are classified, but also because any technological issues will solve themselves over time. Human engineers are pretty smart. If some piece of code doesn’t work, we’ll find a solution. Technology isn’t holding us back in the realm of military drones. People are, and unfortunately people are the real weakness, as emphasized in this quote:

“AI matters because using drones as ‘loyal wingmen’ is a key part of future air power developments,” said Teal Group analyst Richard Aboulafia via email. “It’s less important as a fighter pilot replacement.”

If we build an AI that is smarter, faster, and all-around better than top-notch fighter pilots, why on earth would we not replace pilots? The Army just raised the minimum contract for pilots to 10 years, which in military human resources speak means they can’t keep these people in. All the military services struggle to retain people with skills like flying, electronic warfare, cyber, and anything else that requires significant technical expertise. Using AI to fill these billets gives the military significantly more flexibility in where it sends its manpower. That manpower can instead lead squadrons of drone aircraft or armies of online bots in cyberspace. It’ll require more training and expertise, and certainly a culture change in how we view people in the military.

Besides being shortsighted about replacing people, the other weakness we are going to find with autonomous systems is that we do a terrible job writing out our intentions. I worked with some highly skilled folks on the Navy’s autonomous sea systems, and one of the biggest challenges was turning what we call “Commander’s Intent” into code. If a vessel is out looking for an enemy, it’s easy to say “Kill this type of enemy when you see them.” It’s harder to give instructions like “Taking the current geopolitical events into consideration, make a judgment call on whether to shoot down an adversary aircraft.”
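A toy sketch makes that gap concrete. Everything below is hypothetical (invented types and rules for illustration, not any real Navy system or code I worked on): the crisp rule of engagement becomes one line of code, while the judgment call has no obvious function body at all.

```python
# Hypothetical sketch: a crisp rule of engagement is trivially codable;
# "Commander's Intent" leaves nothing concrete to implement.
from dataclasses import dataclass

@dataclass
class Contact:
    platform: str            # e.g. "fast-attack craft", "fishing trawler"
    is_hostile_type: bool    # matches the briefed enemy type
    inside_engagement_zone: bool

def crisp_roe(c: Contact) -> bool:
    """'Kill this type of enemy when you see them' -- one line of code."""
    return c.is_hostile_type and c.inside_engagement_zone

def judgment_call(c: Contact, geopolitical_context: dict) -> bool:
    """'Taking current geopolitical events into consideration, decide
    whether to shoot' -- there is no obvious body to write here, which
    is exactly the gap between intent and code."""
    raise NotImplementedError("Commander's Intent is not a specification")

hostile = Contact("fast-attack craft", True, True)
print(crisp_roe(hostile))  # True
```

Until we learn to write intent as precisely as we write the first function, the second one stays a stub.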

To put it bluntly, what does that even mean? The military throws around the idea of “Commander’s Intent” like it’s some sort of magic that springs forth from someone’s brain. In reality, it’s a lot of processing happening in the back of your mind that constantly takes in data from the world around you. The military benefits from having extraordinary people who stick around long enough to reach command. These extraordinary people find ways to take an ugly bureaucracy devoted to mediocrity and somehow make it work. As our military bureaucracy has grown, this has gotten more difficult. Extraordinary people are less likely to stick around to fight a bureaucracy devoted to maintaining the status quo, especially when business is happy to snap them up and pay them more. Autonomous systems give us a chance to drop much of the bureaucracy and focus on intent, strategy, and “end state,” or what we want the world to look like at the end. If we don’t embrace this change, we’re missing out on the truly revolutionary changes that autonomy gives us.

Future warfare is going to feature autonomous systems, and it’s going to highlight how weak human beings are in a variety of areas. Rather than fight this, the military should embrace autonomous systems as a chance to recapitalize manpower. It should also begin training its future commanders, flag and general officers, on how to actually write out their intent, and stop relying on chance to give us great commanders. We can’t let a military bureaucracy devoted to maintaining the status quo on manpower stifle the massive innovation that AI offers us.

This post represents the views of the author and not the views of the Department of Defense, Department of the Navy, or any other government agency.