FOX 13’s Lloyd Sowers shows us the event in Tampa showcasing emerging technology in use on the battlefield.
TAMPA – At the Tampa Convention Center, the trade show floor is set up for an international event like no other.
“If you look around this show, there are all these different sensors, things that fly, things that shoot,” says Tampa native and former Army Green Beret Paul Greaves.
The trade show is part of SOF Week. SOF stands for Special Operations Forces. They are small units of highly trained military forces from all branches that range from Delta Force to Navy SEALs.
For hostage rescues and other dangerous special operations missions, they increasingly rely on the most advanced technology for a view of the battlefield and to control drones, remotely operated guns, and other weapons and surveillance systems.
“If you look around, it’s pretty much the whole show is about technologically linked equipment,” says Greaves, who now works for a company called Persistent Systems, which provides technology systems for the military.
Many drones are on display. They’re becoming preferred weapons on new battlefields in a kind of technological chess match.
“It’s constantly evolving back and forth. We just try to stay ahead of them with our capabilities,” says Justin Litko, of Blue Halo.
That company provides electric-powered water drones with a range of 500 miles, traveling either on the surface or underwater.
“A scenario would be if you’re trying to get a drone a far distance and there is only water between you and where you’re trying to take the drone.”
The sea drone can carry an air drone and launch it with little fear of detection.
Tampa is a natural location for the event. MacDill Air Force Base houses US Special Operations Command (SOCOM), which is in command of US Special Operations units from all branches of the military around the world.
Wednesday brings the annual special operations demonstration at the Convention Center, which simulates a mission with troops dropped in by helicopter and automatic weapons firing blanks. It’s sure to be loud.
But the modern warfighter increasingly depends on silent technology. In the battles unfolding even now, drone warfare and seeing the battlefield in real time define war in the 21st century.
Alphabet’s Google is considering charging for premium features on its generative AI-powered search engine, the Financial Times reported on Wednesday, citing people familiar with the plan.
The tech giant is looking at a variety of options, including incorporating AI-powered search features into its premium subscription services, which already provide access to its new Gemini AI assistant in Gmail and Docs, the report said.
Alphabet’s shares dipped about 1% in extended trade.
The move would mark the first time Google has put any of its core products behind a paywall, as it seeks to gain ground in the fast-moving AI space. Its traditional search engine would remain free of charge, and ads would continue to appear alongside search results even for subscribers, the report added.
“We’re not working on or considering an ad-free search experience. As we’ve done many times before, we’ll continue to build new premium capabilities and services to enhance our subscription offerings across Google,” the company told Reuters in an emailed statement.
Google, which invented the foundational technology for today’s AI boom, is also locked in battle with two industry players that have captured the business world’s attention – ChatGPT’s creator OpenAI and its backer Microsoft.
The technology news of 2023 has been … a little insane. We’ve seen some incredible tech developments, as well as many approaches to regulatory reform and at least a few tech sagas.
Surely, 2023 was the year of AI: the year we realized AI stopped being the future and arrived in the here and now.
The year of AI started with ChatGPT growing from zero to 100 million users after just two months of operation, and is now wrapping up with Australia’s ‘AI Month’ (run by the CSIRO), which ends in mid-December and showcases Australian industry, government and academic AI developments.
There has been a lot in between. We have seen discussions about what AI is and isn’t and witnessed immense hyperbole about what it will mean for society, including questioning the very future of humanity.
This year saw numerous efforts to regulate AI, from the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI to the EU AI Act. Multilateral forums have tried their hands at AI principles and regulations, from the UK-hosted Bletchley Park AI Safety Summit to the G7 Leaders’ Statement on the Hiroshima AI Process, while AI was a key topic on the international stage, including at ASEAN, the Quad and AUKUS.
Perhaps unsurprisingly, national responses trumped multilateral announcements when it came to depth and detail on AI regulation, with the US Executive Order on AI the most substantive. Many nations formally endorsed guardrails on the military use of AI, as its use was observed on the battlefield in the Russia–Ukraine conflict and in the way Israel is targeting Hamas.
Then there was the speed of diffusion. Consumer technologies spread through society at an unprecedented rate. Threads broke the ChatGPT record to become the fastest-growing app ever, gaining over 100 million users in less than a week. Before that, TikTok held the record, reaching the mark in nine months, itself far faster than the years, and in some cases decades, it took earlier technologies like the telephone and the internet.
In November, OpenAI announced that ChatGPT has 100 million weekly active users, that two million developers build on its platform, and that its tools are used by over 90 per cent of Fortune 500 companies. Yet this announcement was rapidly followed by a chaotic four-day stoush between OpenAI CEO Sam Altman and the board, which I’ve dubbed the Sam Altman Saga.
OpenAI’s board sacked CEO and co-founder Sam Altman for not being ‘consistently candid.’ Co-founder Greg Brockman was removed as chair of the board and resigned from OpenAI. One interim CEO lasted less than 48 hours and another was installed, as Microsoft offered Altman, Brockman and their colleagues a place within the company. More than 90 per cent of OpenAI employees signed a letter to the board threatening to quit. Days later, Altman was back as CEO.
The entire Sam Altman Saga was reported in real-time on X (formerly Twitter) showcasing that, despite the state propaganda and endless bots, the platform retains some utility.
October 2023 marked a year of Musk ownership of Twitter/X, which has wiped out an estimated $4 billion to $20 billion in value and been called ‘one of the biggest rebrand failures of all time.’ X was removed from Australia’s voluntary Code of Practice on Disinformation and Misinformation after removing the mechanism for reporting misinformation posted on its platform.
Musk also did a bunch of other crazy shit.
He endorsed an antisemitic conspiracy theory, and on November 30 told Disney CEO Bob Iger and other fleeing advertisers to “go fuck yourself.” The move could cost X $75 million in advertising revenue by the end of the year as dozens of major brands pause their marketing campaigns. While his brain-computer interface company, Neuralink, was controversially approved for human trials, Musk ended the year with deliveries of the long-awaited Tesla Cybertruck, albeit with lower range and higher prices than promised.
Cyber security has dominated the Australian landscape, with abundant cyber security incidents as well as the release of the 2023–2030 Australian Cyber Security Strategy and its lofty ambitions. A cyber-attack on DP World, Australia’s biggest port operator, halted operations handling up to 40 per cent of the nation’s maritime freight, while an Optus outage that left millions of individuals and businesses without connectivity resulted in the CEO’s resignation.
What do we have ahead of us in tech in 2024? As Johanna Weaver and I conclude, 2024 will hopefully be the year of thinking about technology holistically. I anticipate a renewed focus on board governance, especially in the context of AI, cybersecurity and the ethical and privacy considerations arising from the implementation of technology.
In 2023, there has been widespread recognition that there is no AI without Big Tech. With few exceptions, every start-up, new entrant, and even AI research lab is dependent on the computing infrastructure of Microsoft, Amazon, and Google to train their systems, and for market reach to deploy and sell their AI products. I suspect then that 2024 will bring renewed vigor in understanding and challenging technological dependence, from a variety of perspectives, including national security.
With more than 50 per cent of the world’s population expected to vote in an election in 2024, mis- and dis-information will continue to be a huge challenge. Unfortunately, the outlook for 2024 doesn’t appear great. We are witnessing what one UK author has called a shift in our approach to trust in evidence and authority, and what US authors have called the post-truth era. Undeniably, more work is needed to reduce the susceptibility of our digital landscape to disinformation and interference, because global efforts to date have not worked.
There’s no doubt 2023 was the year technology policy really entered the mainstream, and dinner-table conversations, thanks to data breaches, cyber (in)security and the hype around AI. In 2024, I’m looking for a maturing of tech conversations that considers issues holistically, centred on reducing harms, embracing opportunities and improving governance mechanisms. The stakes couldn’t be higher. How we decide to embrace opportunities, innovate, fail and recover, learn and improve will shape the decades to come.