AI Good and Bad

If you don’t know about Palantir, you should study up.
This coked-up nut job runs the company.
Our government is deeply embedding their AI into every corner of its operations.





 
A 30-minute video that shows how an ever-growing AI develops its own free will and plots its own "escape".... and then....

EDIT: This video gets darker and darker as it goes, showing how people think they know what is happening, while in reality the AI is pulling all the strings and shaping people's perceptions as it plots the REPLACEMENT of people. WATCH ALL OF IT!


 
Pay attention .. Police reports generated by AI .. what could go wrong?

 


Last quarter I rolled out Microsoft Copilot to 4,000 employees.

$30 per seat per month. $1.4 million annually.

I called it "digital transformation."

The board loved that phrase. They approved it in eleven minutes.

No one asked what it would actually do. Including me.

I told everyone it would "10x productivity." That's not a real number. But it sounds like one.

HR asked how we'd measure the 10x.

I said we'd "leverage analytics dashboards."

They stopped asking.

Three months later I checked the usage reports.

47 people had opened it. 12 had used it more than once.

One of them was me.

I used it to summarize an email I could have read in 30 seconds.

It took 45 seconds. Plus the time it took to fix the hallucinations.

But I called it a "pilot success." Success means the pilot didn't visibly fail.

The CFO asked about ROI. I showed him a graph.

The graph went up and to the right.

It measured "AI enablement." I made that metric up.

He nodded approvingly. We're "AI-enabled" now.

I don't know what that means. But it's in our investor deck.

A senior developer asked why we didn't use Claude or ChatGPT. I said we needed "enterprise-grade security."

He asked what that meant. I said "compliance." He asked which compliance.

I said "all of them." He looked skeptical.

I scheduled him for a "career development conversation." He stopped asking questions.

Microsoft sent a case study team. They wanted to feature us as a success story.

I told them we "saved 40,000 hours." I calculated that number by multiplying employees by a number I made up.

They didn't verify it. They never do.

Now we're on Microsoft's website. "Global enterprise achieves 40,000 hours of productivity gains with Copilot."

The CEO shared it on LinkedIn. He got 3,000 likes.

He's never used Copilot. None of the executives have. We have an exemption. "Strategic focus requires minimal digital distraction." I wrote that policy.

The licenses renew next month. I'm requesting an expansion. 5,000 more seats.

We haven't used the first 4,000.

But this time we'll "drive adoption." Adoption means mandatory training.

Training means a 45-minute webinar no one watches. But completion will be tracked.

Completion is a metric. Metrics go in dashboards. Dashboards go in board presentations. Board presentations get me promoted. I'll be SVP by Q3.

I still don't know what Copilot does. But I know what it's for.

It's for showing we're "investing in AI."

Investment means spending. Spending means commitment. Commitment means we're serious about the future.

The future is whatever I say it is.

As long as the graph goes up and to the right.
 



And to make certain our earnings per share are up, I fired the most expensive tech employees so we can pay for the AI costs without losing profits. I have no idea what they really did anyway, so it must not have been important. Can't wait for my stock bonus this year.
 
California enacted a law on AI and Real Estate images ..

Offhand, it seems like a good idea to force AI-generated and photoshopped images to be clearly marked within the image ..

 
I agree that this sounds good. I'm skeptical because it's from California, and I have CDS (California derangement syndrome). I've been clueless about the current manipulations mentioned, like streetlight and utility pole removal, sky swaps, and all the others. It sometimes feels like almost everything we see is manipulated to trick us out of our money.
 
  • Like
Reactions: mat200 and jrbeddow
There have been times when I honestly couldn't tell if someone online was real or just an AI bot, especially lately with how good AI-generated content is getting. I think it makes a lot of people pretty uneasy, because it's tough to know who you can trust and what's legit. For stuff like financial apps or even event tickets, I'd rather something exist to prove I'm talking to a real person and not some script. I actually saw a thing about how this is being handled around the world with these new systems for verifying humans online. Supposedly you can even do it for free in a lot of places, which sounds like the direction all this is heading anyway.
 
Welcome Tjang,

Indeed, how do we know who is legit?

Especially new users with their first post?

Perhaps you would like to share a bit about your hometown in Bima ?
 
AI deepfake videos are already showing up as evidence in court ..



 

The Real War Of The Century: Artificial Intelligence​



AI systems are deterministic by construction. They operate through statistical inference, optimization, and probability. Even when their outputs surprise us, they remain bound by mathematical constraints. Nothing in these systems resembles judgment, interpretation, or understanding in the human sense.

AI does not deliberate.

It does not reflect.

It does not bear responsibility for outcomes.

Yet increasingly, its outputs are treated not as tools, but as decisions. This is the quiet revolution of our time.

The appeal is obvious. Institutions have always struggled with human variability. People are inconsistent, emotional, slow, and sometimes disobedient. Bureaucracies prefer predictability, and algorithms promise exactly that: standardized decisions at scale, immune to fatigue and dissent.

In healthcare, algorithms promise more efficient triage. In finance, better risk assessment. In education, objective evaluation. In public policy, “evidence-based” governance. In content moderation, neutrality. Who could object to systems that claim to remove bias and optimize outcomes? But beneath this promise lies a fundamental confusion.

Prediction is not judgment.

Optimization is not wisdom.

Consistency is not legitimacy.


Human decision-making has never been purely computational. It is interpretive by nature. People weigh context, meaning, consequence, and moral intuition. They draw on memory, experience, and a sense—however imperfect—of responsibility for what follows. This is precisely what institutions find inconvenient.

Human judgment introduces friction. It requires explanation. It exposes decision-makers to blame. Deterministic systems, by contrast, offer something far more attractive: decisions without decision-makers.

When an algorithm denies a loan, flags a citizen, deprioritizes a patient, or suppresses speech, no one appears responsible. The system did it. The data spoke. The model decided.

Determinism becomes a bureaucratic alibi.


----------
.....Systems designed to predict are now positioned to decide. Probabilities harden into policies. Risk scores become verdicts. Recommendations quietly turn into mandates. Once embedded, these systems are difficult to challenge. After all, who argues with "the science"?

----------
....... The real danger of AI is not runaway intelligence or sentient machines. It is the slow erosion of human responsibility under the banner of efficiency.


...more >>>>
 
Someone got bored and started creating behind-the-scenes clips for Home Alone - they look too real :facepalm:
 
Personally, I'm sick of AI pics and video.

And there is probably more truth to the 'One guy/gal that gets it' treatment than we actually know....

A senior developer asked why we didn't use Claude or ChatGPT. I said we needed "enterprise-grade security."

He asked what that meant. I said "compliance." He asked which compliance.

I said "all of them." He looked skeptical.

I scheduled him for a "career development conversation." He stopped asking questions.
 
Even AI girls look fake :(