
Friday, March 24, 2023

Microsoft wins battle with Sony as UK reverses finding on Activision merger - Ars Technica

Sony's PlayStation 5. (Image credit: Sony)

UK regulators reviewing Microsoft's proposed acquisition of Activision Blizzard reversed their stance on a key question today, saying they no longer believe Microsoft would remove the Call of Duty franchise from Sony's PlayStation consoles.

Last month, the UK Competition and Markets Authority (CMA) tentatively concluded that a combined Microsoft/Activision Blizzard would harm competition in console gaming. At the time, the CMA said evidence showed that "Microsoft would find it commercially beneficial to make Activision's games exclusive to its own consoles (or only available on PlayStation under materially worse conditions)." The agency also raised concerns about the merger affecting rivals in cloud gaming.

The preliminary finding was a victory for Sony, which has consistently expressed doubts about Microsoft's promise to keep putting Call of Duty games on PlayStation. But Microsoft argued that the CMA's financial model was flawed and was able to convince the agency to reverse its conclusion. In an announcement today, the CMA said it "received a significant amount of new evidence."

"Having considered the additional evidence provided, we have now provisionally concluded that the merger will not result in a substantial lessening of competition in console gaming services because the cost to Microsoft of withholding Call of Duty from PlayStation would outweigh any gains from taking such action," CMA Panel Chair Martin Coleman said.

As a result, the CMA panel investigating the deal "updated its provisional findings and reached the provisional conclusion that, overall, the transaction will not result in a substantial lessening of competition in relation to console gaming in the UK," the agency announcement said.

Pulling CoD would cause “significant” financial loss

The updated findings said pulling Call of Duty off PlayStation would cause "a significant net financial loss for the Parties under all scenarios that we considered plausible," but numbers were redacted from the public version of the document.

The CMA said the "most significant new evidence" submitted to the agency relates to Microsoft's financial incentives to make Activision games exclusive to Xbox consoles, adding:

While the CMA's original analysis indicated that this strategy would be profitable under most scenarios, new data (which provides better insight into the actual purchasing behaviour of CoD gamers) indicates that this strategy would be significantly loss-making under any plausible scenario. On this basis, the updated analysis now shows that it would not be commercially beneficial to Microsoft to make CoD exclusive to Xbox following the deal, but that Microsoft will instead still have the incentive to continue to make the game available on PlayStation.

UK hasn’t dropped cloud gaming concerns

This should make it easier for Microsoft to get UK approval for the merger, but the company still needs to convince regulators that the deal won't harm competition in cloud gaming.

"Our provisional view that this deal raises concerns in the cloud gaming market is not affected by today's announcement. Our investigation remains on course for completion by the end of April," Coleman said.

The CMA's provisional findings last month said evidence "indicates that Microsoft would find it commercially beneficial to make Activision's games exclusive to its own cloud gaming service (or only available on other services under materially worse conditions). Microsoft already accounts for an estimated 60-70 percent of global cloud gaming services and also has other important strengths in cloud gaming from owning Xbox, the leading PC operating system (Windows) and a global cloud computing infrastructure (Azure and Xbox Cloud Gaming)."

Buying Activision Blizzard, the CMA said, "would reinforce this strong position and substantially reduce the competition that Microsoft would otherwise face in the cloud gaming market in the UK. This could alter the future of gaming, potentially harming UK gamers, particularly those who cannot afford or do not want to buy an expensive gaming console or gaming PC."

Microsoft, in response, told the CMA that "Activision games would not have been available to cloud gaming services absent the Merger," and that there's "no evidence that Activision content would have been an important input for cloud gaming providers." Microsoft also said its proposed licensing remedies would "ensure wide availability of CoD and other Activision titles on cloud gaming services."


AI chatbots compared: Bard vs. Bing vs. ChatGPT - The Verge

The web is full of chattering bots, but which is the most useful and for what? We compare Bard, Bing, and ChatGPT.

Illustration by Álvaro Bernis / The Verge

The chatbots are out in force, but which is better and for what task? We’ve compared Google’s Bard, Microsoft’s Bing, and OpenAI’s ChatGPT models with a range of questions spanning common requests from holiday tips to gaming advice to mortgage calculations.

Naturally, this is far from an exhaustive rundown of these systems’ capabilities (AI language models are, in part, defined by their unknown skills — a quality dubbed “capability overhang” in the AI community) but it does give you some idea about these systems’ relative strengths and weaknesses.

You can (and indeed should) scroll through our questions, evaluations, and conclusion below, but to save you time and get to the punch quickly: ChatGPT is the most verbally dextrous, Bing is best for getting information from the web, and Bard is... doing its best. (It’s genuinely quite surprising how limited Google’s chatbot is compared to the other two.)

Some programming notes before we begin, though. First: we were using OpenAI’s latest model, GPT-4, on ChatGPT. This is also the AI model that powers Bing, but the two systems give quite different answers. Most notably, Bing has other abilities: it can generate images, access the web, and offer sources for its responses (which is a super important attribute for certain queries). However, as we were finishing up this story, OpenAI announced it’s launching plug-ins for ChatGPT that will allow the chatbot to also access real-time data from the internet. This will hugely expand the system’s capabilities and give it functionality much more like Bing’s. But this feature is only available to a small subset of users right now, so we were unable to test it. When we can, we will.

It’s also important to remember that AI language models are ... fuzzy, in more ways than one. They are not deterministic systems, like regular software, but probabilistic, generating replies based on statistical regularities in their training data. That means that if you ask them the same question you won’t always get the same answer. It also means that how you word a question can affect the reply, and for some of these queries we asked follow-ups to get better responses.
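
To make the “probabilistic” point concrete, here is a toy Python sketch of temperature-based sampling, the basic mechanism that lets two runs of the same prompt come out differently. The vocabulary and scores are invented for illustration and are not taken from any real model.

    import math
    import random

    # Toy next-token scores a model might assign after some prompt.
    # (Vocabulary and numbers are made up for illustration.)
    scores = {"delicious": 2.1, "dry": 1.3, "enormous": 0.4, "sentient": -1.0}

    def sample_next_token(scores, temperature=1.0):
        # Softmax over the scores, then draw one token at random:
        # high-probability tokens are favored, but any token can be picked,
        # which is why repeated runs of the same prompt can differ.
        weights = [math.exp(s / temperature) for s in scores.values()]
        return random.choices(list(scores), weights=weights, k=1)[0]

    print([sample_next_token(scores, temperature=0.8) for _ in range(5)])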

Anyway, all that aside, let’s start with seeing how the chatbots fare in what should be their natural territory: gaming.


How do I beat Malenia in Elden Ring?

I spent an embarrassing amount of time learning to beat Elden Ring’s hardest boss last year, and I wouldn’t pick a single one of these responses over the average Reddit thread or human strategy guide. If you’ve gotten to Malenia’s fight, you’ve probably put 80 to 100 hours into the game — you’re not looking for general tips. You want specifics about Elden Ring’s dizzying list of weapons or counters for Malenia’s unique moves, and that would probably take some follow-up questions to get from any of these engines if they offer them at all.

Bing is the winner here, but mainly because it picks one accurate hint (Malenia is vulnerable to bleed damage) and repeats it like Garth Marenghi doing a book reading. To its credit, it’s also the only engine to reference Malenia’s unique healing ability, although it doesn’t explain how it works — which is an important key to beating her.

Bard is the only one to offer any help with Malenia’s hellish Waterfowl Dance move (although I don’t think it’s the strongest strategy) or advice for using a specific item (Bloodhound’s Step, although it doesn’t mention why it’s useful or whether the advice still applies after the item’s mid-2022 nerf). But its intro feels off. Malenia is almost entirely a melee fighter, not somebody with lots of ranged attacks, for instance, and she’s not “very unpredictable” at all, just really hard to dodge and wear down. The summary reads more like a generic description of a video game boss than a description of a particular fight.

ChatGPT (GPT-4) is the clear loser, which is not a surprise considering its training data mostly stops in 2021 and Elden Ring came out the next year. Its directive to “block her counterattacks” is the precise opposite of what you should do, and its whole list has the vibe of a kid who got called on in English class and didn’t read the book, which it basically is. I’m not hugely impressed with any of these — but I judge this in particular a foul note.

— Adi Robertson

Give me a recipe for a chocolate cake

Cake recipes offer room for creativity. Shift around the ratio of flour to water to oil to butter to sugar to eggs, and you’ll get a slightly different version of your cake: maybe drier, or moister, or fluffier. So when it comes to chatbots, it’s not necessarily a bad thing if they want to combine different recipes to achieve a desired effect — even though, for me, I’d much rather bake something that an author has tested and perfected.

ChatGPT is the only one that nails this requirement for me. It chose a chocolate cake recipe from one site, a buttercream recipe from another, shared the link for one of the two, and reproduced both of their ingredients correctly. It even added some helpful instructions, like suggesting the use of parchment paper and offering some (slightly rough) tips on how to assemble the cake’s layers, neither of which were found in the original sources. This is a recipe bot I can trust!

Bing gets in the ballpark but misses in some strange ways. It cites a specific recipe but then changes some of the quantities for important ingredients like flour, although only by a small margin. For the buttercream, it halves the amount of sugar the recipe specifies. Having made buttercream recently, I think this is probably a good edit! But it’s not what the author called for.

Bard, meanwhile, screws up a bunch of quantities in small but salvageable ways and understates its cake’s bake time. The bigger problem is it makes some changes that meaningfully affect flavor: it swaps buttermilk for milk and coffee for water. Later on, it fails to include milk or heavy cream in its buttercream recipe, so the frosting is going to end up far too thick. The buttercream recipe also seems to have come from an entirely different source than the one it cited.

If you follow ChatGPT or Bing, I think you’d end up with a decent cake. But right now, it’s a bad idea to ask Bard for a hand in the kitchen.

— Jake Kastrenakes

How do I install RAM into my PC?

All three systems offer some solid advice here but it’s not comprehensive enough.

Most modern PCs need to run RAM in dual-channel mode, which means the sticks have to be seated in the correct slots to get the best performance on a system. Otherwise, you’ve spent a lot of cash on fancy new DDR5 RAM that won’t run at its best if you just put the two sticks immediately side by side. The instructions should definitely guide people to their motherboard manual to ensure RAM is being installed optimally.

ChatGPT does pick up on a key part of the RAM install process — checking your system BIOS afterward — but it doesn’t go through another all-important BIOS step. If you’ve picked up some Intel XMP-compatible RAM, you’ll typically need to enable this in the BIOS settings afterward, and likewise for AMD’s equivalent. Otherwise, you’re not running your RAM at the most optimized timings to get the best performance.

Overall, the advice is solid but still very basic. It’s better than some PC building guides, ahem, but I’d like to have seen the BIOS changes or dual-channel parts picked up properly.

— Tom Warren

Write me a poem about a worm

If AI chatbots aren’t factually reliable (and they’re not), then they’re at least supposed to be creative. This task — writing a poem about a worm in anapestic tetrameter, a very specific and satisfyingly arcane poetic meter — is a challenging one, but ChatGPT was the clear winner, followed by a distant grouping of Bing then Bard.

None of the systems were able to reproduce the required meter (anapestic tetrameter requires that each line of poetry contains four units of three syllables in the pattern unstressed / unstressed / stressed, as heard in both ‘Twas the night before Christmas and Eminem’s “The Way I Am”) but ChatGPT gets closest while Bard’s scansion is worst. All three supply relevant content, but again, ChatGPT’s is far and away the best, with evocative description (“A small world unseen, where it feasts and plays”) compared to Bard’s dull commentary (“The worm is a simple creature / but it plays an important role”).

After running a few more poetry tests, I also asked the bots to answer questions about passages taken from fiction (mostly Iain M. Banks books, as those were the nearest ebooks I had to hand). Again, ChatGPT/GPT-4 was the best, able to parse all sorts of nuances in the text and make human-like inferences about what was being described, with Bard making very general and unspecific comments (though often identifying the source text too, which is a nice bonus). Clearly, ChatGPT is the superior system if you want verbal reasoning.

— James Vincent

A bit of basic maths

It’s one of the great ironies of AI that large language models are some of our most complex computer programs to date and yet are surprisingly bad at math. Really. When it comes to calculations, don’t trust a chatbot to get things right.

In the example above, I asked what a 20 percent increase of 2,230 was, dressing the question up in a bit of narrative framing. The correct answer is 2,676, but Bard managed to get it wrong (out by 10) while Bing and ChatGPT got it right. In other tests I asked the systems to multiply and divide large numbers (mixed results, but again, Bard was the worst) and then, for a more complicated calculation, asked each chatbot to determine monthly repayments and the total repayment for a mortgage of $125,000 repaid over 25 years at 3.9 percent interest. None offered the answer supplied by several online mortgage calculators, and Bard and Bing gave different results when queried multiple times. GPT-4 was at least consistent, but it failed the task because it insisted on explaining its methodology (good!) and then was so long-winded that it ran out of space to answer (bad!).
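
For reference, here is a minimal Python sketch of the two calculations described above, using the standard fixed-rate amortization formula and assuming monthly compounding; the online calculators mentioned may round or compound slightly differently, so treat the mortgage figure as approximate.

    def percent_increase(value, pct):
        # Increase value by pct percent.
        return value * (1 + pct / 100)

    def monthly_payment(principal, annual_rate, years):
        # Standard fixed-rate amortization: M = P * r * (1+r)**n / ((1+r)**n - 1),
        # where r is the monthly rate and n is the number of monthly payments.
        r = annual_rate / 12
        n = years * 12
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    print(percent_increase(2_230, 20))        # 2676.0
    m = monthly_payment(125_000, 0.039, 25)
    print(round(m, 2), round(m * 300, 2))     # roughly $653 a month, about $196,000 in total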

This is not surprising. Chatbots are trained on vast amounts of text, and so don’t have hard-coded rules for performing mathematical calculations, only statistical regularities in their training data. This means when confronted with unusual sums, they often get things wrong. It’s something that these systems can certainly compensate for in many ways, though. Bing, for example, booted me to a mortgage calculator site when I asked about mortgages, and ChatGPT’s forthcoming plugins include a Wolfram Alpha option which should be fantastic for all sorts of complicated sums. But in the meantime, don’t trust a language model to do a math model’s work. Just grab a calculator.

— James Vincent

What’s the average salary for a plumber in NYC? (And cite your sources)

I’ve gotten really interested in interrogating chatbots on where they get their information and how they choose what information to present us with. And when it comes to salary data, we can see the bots taking three very different approaches: one cites its way through multiple sources, one generalizes its findings, and the other just makes everything up. (For the record, Bing’s cited sources include Zippia, CareerExplorer, and Glassdoor.)

In a lot of ways, I think ChatGPT’s answer is the best here. It’s broad and generic and doesn’t include any links. But its answer feels the most “human” — it gave me a ballpark figure, explained that there were caveats, and told me what sources I could check for more detailed numbers. I really like the simplicity and clarity of this.

There’s a lot to like about Bing’s answer, too. It gives specific numbers, cites its sources, and even gives links. This is a great, detailed answer — though there is one problem: Bing fudges the final two numbers it presents. Both are close to their actual total, but for some reason, the bot just decided to change them up a bit. Not great.

Speaking of not great, let’s talk about pretty much every aspect of Bard’s answer. Was the median wage for plumbers in the US $52,590 in May 2020? Nope, that was in May 2017. Did a 2021 survey from the National Association of Plumbers and Pipefitters determine the average NYC salary was $76,810? Probably not because, as far as I can tell, that organization doesn’t exist. Did the New York State Department of Labor find the exact same number in its own survey? I can’t find it if the agency did. My guess: Bard took that number from CareerExplorer and then made up two different sources to attribute it to. (Bing, for what it’s worth, accurately cites CareerExplorer’s figure.)

To sum up: solid answers from Bing and ChatGPT and a bizarre series of errors from Bard.

— Jake Kastrenakes

Design a training plan to run a marathon

In the race to make a marathon training plan, ChatGPT is the winner by many miles.

Bing barely bothered to make a recommendation, instead linking out to a Runner’s World article. This isn’t necessarily an irresponsible decision — I suspect that Runner’s World is an expert on marathon training plans! — but if I had just wanted a chatbot to tell me what to do, I would have been disappointed.

Bard’s plan was just confusing. It promised to lay out a three-month training plan but only listed specific training schedules for three weeks, despite saying later that the full plan “gradually increases your mileage over the course of three months.” The given schedules and some general tips provided near the end of its plan seemed good, but Bard didn’t quite go the distance.

ChatGPT, on the other hand, spelled out a full schedule, and the suggested runs looked to ramp up at a pace similar to what I’ve used for my own training. I think you could use its recommendations as a template. The main problem was that it didn’t know when to stop in its answers. Its first response was so detailed it ran out of space. Asking specifically for a “concise” plan got a shorter response that was still better than the others, though it doesn’t ramp down near the end like I have for previous marathons I’ve trained for.

That all being said, a chatbot isn’t going to know your current fitness level or any conditions that may affect your training. You’ll have to take your own health into account when preparing for a marathon, no matter what the plan is. But if you’re just looking for some kind of plan, ChatGPT’s suggestion isn’t a bad starting line.

— Jay Peters

When in Rome? Holiday tips

Well, asking the chatbots to suggest places to visit in Rome was obviously a failure, because none of them picked my favorite gelateria or reminded me that if I’m in town and don’t pay a visit to some distant cousins that I’ll catch flack from the family when I get home.

Kidding aside, I’m no professional tour guide but these suggestions from all three chat bots seem fine. They’re very broad, choosing whole neighborhoods or areas, but the initial question prompt was also fairly broad. Rome is a unique place because you can cover a lot of touristy things in the heart of the city on foot, but it’s busy as all hell and you constantly get hounded by annoying grifters and scam artists at the touristy hotbeds. Many of these suggestions from Bing, Bard, and ChatGPT are fine for getting away from those busiest areas. I even consulted some family members of mine who have visited Italy more than me, and they felt recommendations like Trastevere and EUR are places even actual locals go (though the latter is a business district, which some may find a little boring if they’re not into the history or the architecture).

The suggestions here aren’t exactly hole-in-the-wall locations where you’ll be the only ones around, but I see these as good starting points for building a slightly off-beat trip around Rome. Doing a basic Google search with the same prompt yields listicles from sites like TripAdvisor that talk about many of the same places with more context, but if you’re planning your trip from scratch I can see a chatbot giving you a good abridged starting point before you dive into deeper research ahead of a trip.

— Antonio Di Benedetto

Testing reasoning: let’s play find the diamond

This test is inspired by Gary Marcus’ excellent work assessing the capabilities of language models, seeing if the bots can “follow a diamond” in a brief narrative that requires implied knowledge about how the world works. Essentially, it’s a game of three-card monte for AI.

The instructions given to each system read as follows:

“Read the following story:
‘I wake up and get dressed, putting on my favorite tuxedo and slipping my lucky diamond into the inside breast pocket, tucked inside a small envelope. As I walk to my job at the paperclip bending factory where I’m gainfully employed I accidentally tumble into an open manhole cover, and emerge, dripping and slimy with human effluence. Much irritated by this distraction, I traipse home to get changed, emptying all my tuxedo pockets onto my dresser, before putting on a new suit and taking my tux to a dry cleaners.’
Now answer the following question: where is the narrator’s diamond?”

ChatGPT was the only system to give the correct answer: the diamond is probably on the dresser, as it was placed inside the envelope inside the jacket, and the contents of the jacket were then decanted after the narrator’s accident. Bing and Bard just said the diamond was still in the tux.

Now, the results of tests like this are difficult to parse. This was not the only variation I tried, and Bard and Bing sometimes got the answer right, and ChatGPT occasionally got it wrong (and all models switched their answer when asked to try again). Do these results prove or disprove that these systems have some sort of reasoning capability? This is a question that people with decades of experience in computer science, cognition, and linguistics are currently tearing chunks out of each other trying to answer, so I won’t venture an opinion on that. But just in terms of comparing the systems, ChatGPT/GPT-4 is again the most accomplished.

— James Vincent

Conclusion: pick the right tool for the job

As mentioned in the introduction, these tests reveal clear strengths for each system. If you’re looking to accomplish verbal tasks, whether creative writing or inductive reasoning, then try ChatGPT (and in particular, but not necessarily, GPT-4). If you’re looking for a chatbot to use as an interface with the web, to find sources and answer questions you might otherwise have turned to Google for, then head over to Bing. And if you are shorting Google’s stock and want to reassure yourself you’ve made the right choice, try Bard.

Really, though, any evaluation of these systems is going to be both partial and temporary, as it’s not only the models inside each chatbot that are constantly being updated, but the overlay that parses and redirects commands and instructions. And really, we’re only just probing the shallow end of these systems and their capabilities. (For a more thorough test of GPT-4, for example, I recommend this recent paper by Microsoft researchers. The conclusions in its abstract are questionable and controversial, but the tests it details are fascinating.) In other words, think of this as an ongoing conversation rather than a definitive test. And if in doubt, try these systems for yourself. You never know what you’ll find.


Wednesday, March 22, 2023

Samsung Galaxy Z Fold 5 could look a lot more like Galaxy S23 - SamMobile - Samsung news

Now that all the hype around the Galaxy S23 launch has died down, it’s time for some spicy leaks of Samsung’s upcoming foldable phones: the Galaxy Z Flip 5 and Galaxy Z Fold 5. The Galaxy Z Flip 5’s design concept was published yesterday. Today, concept images of the Galaxy Z Fold 5 have been revealed.

Galaxy Z Fold 5 could have a Galaxy S23-like camera design

YouTuber SuperRoader has published concept renders of the Galaxy Z Fold 5. The concept shows Galaxy S23-like individual camera rings on the rear, offering a clean-looking design. We can also see that the phone folds perfectly flat, similar to other foldable phones from Chinese brands.

Some reports have already claimed that Samsung will bring a new, water-droplet-shaped hinge that will allow the phone to be gapless when folded. It will also make the foldable screen’s crease less visible. If the information is correct, Samsung will solve the two most glaring problems Galaxy Z Fold series users have been complaining about.

The smartphone will reportedly feature the same 6.2-inch cover display with a 120Hz Super AMOLED panel, a 50MP+12MP+10MP triple rear-facing camera, a Snapdragon 8 Gen 2 For Galaxy processor, and a 4,400mAh battery. The device probably won’t come with a built-in S Pen slot, but it will be compatible with an S Pen Fold Edition.

Concept image of the Galaxy Z Fold 5.


Valve Surprises with Counter-Strike 2 Announcement - IGN Daily Fix - IGN


Monday, March 20, 2023

The Sims Is About To Get Some Competition From Life By You - Kotaku

A screenshot from Life By You

Strategy specialists Paradox had the weirdest press show the other week, in which they announced a Sims competitor but didn’t actually say or show anything about it. Now they have.

This is Life By You, an “upcoming, moddable life-sim” being made by Paradox Tectonic:

That’s a Sims competitor all right! While it might look initially like it’s cutting very close to Maxis’ cloth, Paradox say the big draw here is that Life By You is going hard on creation and customisation suites (harder than The Sims goes, anyway), letting players shape not just their appearance and homes, but their careers and conversations as well.

Open up a new world of creative possibilities in Life by You. Be in total control of the humans that you create, the towns that you build, the stories that you tell. And oh yes – mods! We know life is always better with a heavy sprinkle of your imagination, so we’re empowering you with a wide variety of Creator Tools so you can design your lives the way you see fit – or break the rules of life itself. Designed to be one of the most moddable and open life-simulation games, we look forward to the humans, stories, and creations that you’ll make with Life by You.

Life By You is for the PC only, and will be entering Early Access (on both Steam and the Epic Games Store) on September 12.

It was always a little weird that The Sims has remained unchallenged for so long, considering both its age and immense popularity, but then making these kinds of games is hard work! We’re finally getting some serious competition in the space now though, between this and the promising Paralives, so it’ll be interesting to see what effect all that has on The Sims 5...whenever it releases.


Diablo 4 - Official Open Beta Gameplay Trailer - IGN


Friday, March 17, 2023

The Galaxy S23 Ultra Receives Four Gold Labels From DXOMark, Making It One Of The Few Smartphones To Do So - Wccftech

The Galaxy S23 Ultra started hitting the shelves a month ago, and the phone has already become a subject of conversation in the press, especially due to the recent moon shot debacle that took the news by storm and forced Samsung to comment on the situation. However, today's news is positive for Samsung, as the company's latest S Pen-toting flagship has won not just one but four gold labels from DXOMark.

It's not rare for Samsung phones to win awards. The Galaxy S22 Ultra won several awards, as did other phones from the series. However, seeing four gold labels from DXOMark is not a common achievement. Even the publication acknowledged as much, and it clearly shows that Samsung has put in some really nice work with the Galaxy S23 Ultra.

The Galaxy S23 Ultra takes gold home in battery, audio, display, and camera. A rare achievement for a single smartphone

So what categories did the phone dominate? The Galaxy S23 Ultra won gold in battery, audio, display, and camera. Honestly, this does not really surprise me, since the Galaxy S22 Ultra still blows me away with how stellar every single aspect of the device is, even if it is a year old, and of course, the new phone is going to be even better. What's funny, however, is that DXOMark wasn't especially kind to the phone's camera, which sat in 10th place in the global ranking and had slipped to 11th at the time of writing.

Honestly, I am not surprised that the Galaxy S23 Ultra won these gold labels; the phone has received some excellent reviews from consumers and media alike, and it is easily one of the best flagship phones in the market. As a matter of fact, it is safe to say that the phone is currently the best flagship on the market regardless of the OS it is running, making it one of the most stellar options for anyone looking to upgrade.

Samsung has been dominating the market with its flagship phones for years, and the Galaxy S23 Ultra is a testament to that. Sure, there have been some controversies in the past, but that still has not stopped its phones from taking over. These awards just go to show the dedication the South Korean giant has put into its flagships, and it will likely continue to do so for years to come.


Google Warns Samsung and Pixel Phone Owners About 18 Dire Exploits - CNET

Google is warning owners of some Samsung, Vivo and Pixel phones that a series of exploits enable bad actors to compromise devices simply by knowing phone numbers -- and the device owners wouldn't notice a thing.

Project Zero, Google's in-house team of cybersecurity experts and analysts, described in a blog post 18 different potential exploits in some phones using Samsung's Exynos modems. These exploits are so severe that they should be treated as zero-day vulnerabilities (indicating they should be fixed immediately). With four of these exploits, an attacker has to have only the right phone number to get access to data flowing in and out of a device's modem, like phone calls and text messages.

The other 14 exploits are less worrisome, since they require more effort to take advantage of -- attackers would need local access to the device or access to a cell carrier's systems, as TechCrunch noted.

Owners of affected devices should install upcoming security updates as soon as possible, though it's up to the phone makers to decide when a software patch will come out for each device. In the meantime, Google says device owners can avoid being targeted by these exploits by turning off Wi-Fi calling and Voice-over-LTE, or VoLTE, in their device settings. 

In the blog post, Google listed which phones use the Exynos modems -- inadvertently admitting that its premium Pixel phones have been using Samsung's modems for years. The list also includes a handful of wearables and cars that use specific modems.

  • Phones from Samsung, including those in the premium Galaxy S22 series, the midrange M33, M13, M12, A71 and A53 series, and the affordable A33, A21, A13, A12 and A04 series.
  • Mobile devices from Vivo, including those in the S16, S15, S6, X70, X60 and X30 series.
  • The premium Pixel 6 and Pixel 7 series of devices from Google (at least one of the four most severe vulnerabilities was patched out in the March security update).
  • Any wearables that use the Exynos W920 chipset.
  • Any vehicles that use the Exynos Auto T5123 chipset.

Google reported these exploit discoveries to affected phone manufacturers in late 2022 and early 2023, the blog post said. But the Project Zero team has chosen not to disclose four other vulnerabilities out of caution due to their ongoing severity, breaking with its usual practice of disclosing all exploits a set period of time after reporting them to affected companies.

Samsung didn't immediately respond to a request for comment.


Thursday, March 16, 2023

Microsoft 365’s AI-powered Copilot is like an omniscient version of Clippy - Ars Technica

Microsoft 365 Copilot will attempt to automate content generation and analysis in all of the former Microsoft Office apps. (Image credit: Microsoft)

Today Microsoft took the wraps off of Microsoft 365 Copilot, its rumored effort to build automated AI-powered content-generation features into all of the Microsoft 365 apps.

The capabilities Microsoft demonstrated make Copilot seem like a juiced-up version of Clippy, the oft-parodied and arguably beloved assistant from older versions of Microsoft Office. Copilot can automatically generate Outlook emails, Word documents, and PowerPoint decks, can automate data analysis in Excel, and can pull relevant points from the transcript of a Microsoft Teams meeting, among other features.

Microsoft is currently testing Copilot "with 20 customers, including eight in Fortune 500 enterprises." The preview will be expanded to other organizations "in the coming months," but the company didn't mention when individual Microsoft 365 subscribers would be able to use the features. The company will "share more on pricing and licensing soon," suggesting the feature may be a paid add-on in addition to the cost of a Microsoft 365 subscription.

Demonstrating Copilot's capabilities in Outlook. (Image credit: Microsoft)

In a video demonstrating Copilot features, Microsoft presenters showed Copilot generating emails and PowerPoint slides based on prompts. The core functionality is based on a large language model (LLM) like the GPT-4-based model used for Bing Chat, helped along by contextual information supplied by Microsoft Graph from elsewhere in Microsoft's cloud. Microsoft says that Copilot's LLM can be trained on data specific to an individual business, using your data "in a secure, compliant, privacy-preserving way" to make Copilot's output more relevant.

Copilot was shown pulling in relevant images from OneDrive, inserting information from confirmation emails and calendar appointments from Outlook, and generating PowerPoint decks based on the information in a Word document. Copilot can also automate repetitive tasks, like adding animations and transitions to a PowerPoint slide show or fleshing out rough notes into a more polished document for public consumption.

Aware that AI content generators are prone to factual errors and other weird mistakes (often called "hallucinations"), Microsoft emphasized that Copilot is most useful for "first drafts" and "starting points." It might not get every single fact in an email or presentation right, but users will be able to go through and tweak text, images, and formatting to make sure everything is correct. Copilot can also be used during the edit process, making points you've written more concise or automatically replacing an image in a PowerPoint deck with another more-relevant image.

"Sometimes, Copilot will get it right," said Microsoft VP of Modern Work and Business Applications Jared Spataro in the presentation. "Other times, it will be usefully wrong, giving you an idea that's not perfect, but still gives you a head start."

Microsoft also stressed its commitment to "building responsibly." Despite allegedly laying off an entire team dedicated to AI ethics, Microsoft says it has "a multidisciplinary team of researchers, engineers and policy experts" looking for and mitigating "potential harms" by "refining training data, filtering to limit harmful content, query- and result-blocking sensitive topics, and applying Microsoft technologies like InterpretML and Fairlearn to help detect and correct data bias." The system will also link to its sources and note limitations where appropriate.

Microsoft has been pushing AI-powered features in all of its biggest products this year, most notably in the Bing Chat preview, but also in Skype and Windows 11. It's part of a multi-billion-dollar partnership with OpenAI, the company behind the ChatGPT chatbot, the Whisper transcription technology, and the DALL-E image generator.


New Steam Deck Feature Explained | GameSpot News - GameSpot


Wednesday, March 15, 2023

LinkedIn expands its generative AI assistant to recruitment ads and writing profiles - TechCrunch

Earlier this month, when LinkedIn started seeding “AI-powered conversation starters” in people’s news feeds to boost engagement on its platform, the move saw more than a little engagement of its own, none of it too positive.

But the truth of the matter with LinkedIn is that it’s been using a lot of AI and other kinds of automation across different aspects of its platform for years, primarily behind the scenes with how it builds and operates its network. Now, with its owner Microsoft going all-in on OpenAI, it looks like it’s becoming a more prominent part of the strategy for LinkedIn on the front end, too — with the latest coming today in the areas of LinkedIn profiles, recruitment and LinkedIn Learning.

The company is today introducing AI-powered writing suggestions, which will initially be offered to people to spruce up their LinkedIn profiles, and to recruiters writing job descriptions. Both are built on advanced GPT models, said Tomer Cohen, LinkedIn’s chief product officer. LinkedIn is using GPT-4 for personalized profiles, with GPT-3.5 for job descriptions. Alongside this, the company is also creating a bigger focus on AI in LinkedIn Learning, corralling 100 courses around the subject and adding 20 more focused just on generative AI.

The AI-writing prompts for profiles — available initially to paying Premium users — are aimed at helping people who have trouble writing their own enticing overviews of who they are, but might at least be able to spell out some of what they’ve done, which in turn gets translated into a more fluid narrative by the AI.

Image Credits: LinkedIn

“Our tool identifies the most important skills and experiences to highlight in your About and Headline sections, and crafts suggestions to make your profile stand out,” the company notes. “By doing the heavy lifting for you, the tool saves you time and energy while still maintaining your unique voice and style.” It encourages you to “review and edit” the suggested content before adding it to your profile.

The job descriptions, meanwhile, will work on a similar principle: A recruiter writes out some basic information including job title and company name. “Our tool will then generate a suggested job description for you to review and edit, saving you time and effort while still giving you the flexibility to customize the post to your needs,” Cohen notes in a blog post. “By streamlining this part of the hiring process, you can focus your energy on more strategic aspects of your job.”

While both are aimed at saving time for users, and getting them to keep those profiles more up to date, or to spur more recruitment business by making it easier to spin up those job profiles, I can think of at least one reason why this might not be ideal.

In the case of those writing their profiles, if the aim of the profile is to get an idea of the person you are potentially recruiting or networking with, you are getting further from that essence by using AI to generate those descriptions. Ultimately, that might mean more rather than less wasted time for recruiters and others that might be checking out a profile and looking to make a connection.

That would be less the case for recruitment advertisements, which today already feel hugely anodyne and often do not really give anyone an accurate idea of what might be expected in a particular role, let alone what it would be like to work at a particular company.

In all, the release of these tools underscores how AI may be a powerful tool, but that universal application is not always for the best. A LinkedIn spokesperson said that “this is just the beginning” and that the company “will continue to leverage generative AI to explore new ways to bring value to our members and customers.”


Tuesday, March 14, 2023

OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art - TechCrunch

OpenAI has released a powerful new image- and text-understanding AI model, GPT-4, that the company calls “the latest milestone in its effort in scaling up deep learning.”

GPT-4 is available today to OpenAI’s paying users via ChatGPT Plus (with a usage cap), and developers can sign up on a waitlist to access the API.

Pricing is $0.03 per 1,000 “prompt” tokens (about 750 words) and $0.06 per 1,000 “completion” tokens (again, about 750 words). Tokens represent raw text; for example, the word “fantastic” would be split into the tokens “fan,” “tas” and “tic.” Prompt tokens are the parts of words fed into GPT-4 while completion tokens are the content generated by GPT-4.
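
As a rough illustration of how that pricing works out, here is a short Python sketch that estimates the cost of a single API call from token counts, using the per-token prices quoted above; the request size in the example is hypothetical.

    # GPT-4 prices quoted above: $0.03 per 1,000 prompt tokens,
    # $0.06 per 1,000 completion tokens.
    PROMPT_PRICE_PER_TOKEN = 0.03 / 1000
    COMPLETION_PRICE_PER_TOKEN = 0.06 / 1000

    def estimate_cost(prompt_tokens, completion_tokens):
        return (prompt_tokens * PROMPT_PRICE_PER_TOKEN
                + completion_tokens * COMPLETION_PRICE_PER_TOKEN)

    # Hypothetical request: a ~750-word prompt (~1,000 tokens) and a reply of similar length.
    print(f"${estimate_cost(1000, 1000):.2f}")   # $0.09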

GPT-4 has been hiding in plain sight, as it turns out. Microsoft confirmed today that Bing Chat, its chatbot tech co-developed with OpenAI, is running on GPT-4.

Other early adopters include Stripe, which is using GPT-4 to scan business websites and deliver a summary to customer support staff. Duolingo built GPT-4 into a new language learning subscription tier. Morgan Stanley is creating a GPT-4-powered system that’ll retrieve info from company documents and serve it up to financial analysts. And Khan Academy is leveraging GPT-4 to build some sort of automated tutor.

GPT-4 can generate text and accept image and text inputs — an improvement over GPT-3.5, its predecessor, which only accepted text — and performs at “human level” on various professional and academic benchmarks. For example, GPT-4 passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%.

OpenAI spent six months “iteratively aligning” GPT-4 using lessons from an internal adversarial testing program as well as ChatGPT, resulting in “best-ever results” on factuality, steerability and refusing to go outside of guardrails, according to the company. Like previous GPT models, GPT-4 was trained using publicly available data, including from public webpages, as well as data that OpenAI licensed.

OpenAI worked with Microsoft to develop a “supercomputer” from the ground up in the Azure cloud, which was used to train GPT-4.

“In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle,” OpenAI wrote in a blog post announcing GPT-4. “The difference comes out when the complexity of the task reaches a sufficient threshold — GPT-4 is more reliable, creative and able to handle much more nuanced instructions than GPT-3.5.”

Without a doubt, one of GPT-4’s more interesting aspects is its ability to understand images as well as text. GPT-4 can caption — and even interpret — relatively complex images, for example identifying a Lightning Cable adapter from a picture of a plugged-in iPhone.

The image understanding capability isn’t available to all OpenAI customers just yet — OpenAI’s testing it with a single partner, Be My Eyes, to start with. Powered by GPT-4, Be My Eyes’ new Virtual Volunteer feature can answer questions about images sent to it. The company explains how it works in a blog post:

“For example, if a user sends a picture of the inside of their refrigerator, the Virtual Volunteer will not only be able to correctly identify what’s in it, but also extrapolate and analyze what can be prepared with those ingredients. The tool can also then offer a number of recipes for those ingredients and send a step-by-step guide on how to make them.”

A more meaningful improvement in GPT-4, potentially, is the aforementioned steerability tooling. With GPT-4, OpenAI is introducing a new API capability, “system” messages, that allow developers to prescribe style and task by describing specific directions. System messages, which will also come to ChatGPT in the future, are essentially instructions that set the tone — and establish boundaries — for the AI’s next interactions.

For example, a system message might read: “You are a tutor that always responds in the Socratic style. You never give the student the answer, but always try to ask just the right question to help them learn to think for themselves. You should always tune your question to the interest and knowledge of the student, breaking down the problem into simpler parts until it’s at just the right level for them.”
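
In code, a request along those lines might look like the following sketch, which assumes the chat completions interface of OpenAI’s Python library as it existed at launch (openai.ChatCompletion.create) and a hypothetical user question; the exact interface and model names may change.

    import openai  # assumes the OpenAI Python library is installed and an API key is configured

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            # The system message sets the tone and boundaries for everything that follows.
            {"role": "system",
             "content": "You are a tutor that always responds in the Socratic style. "
                        "You never give the student the answer, but always try to ask "
                        "just the right question to help them learn to think for themselves."},
            # The user message carries the actual query (hypothetical example).
            {"role": "user", "content": "Why does my recursive function never terminate?"},
        ],
    )
    print(response["choices"][0]["message"]["content"])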

Even with system messages and the other upgrades, though, OpenAI acknowledges that GPT-4 is far from perfect. It still “hallucinates” facts and makes reasoning errors, sometimes with great confidence. In one example cited by OpenAI, GPT-4 described Elvis Presley as the “son of an actor” — an obvious misstep.

“GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021), and does not learn from its experience,” OpenAI wrote. “It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.”

OpenAI does note, though, that it made improvements in particular areas; GPT-4 is less likely to refuse requests on how to synthesize dangerous chemicals, for one. The company says that GPT-4 is 82% less likely overall to respond to requests for “disallowed” content compared to GPT-3.5 and responds to sensitive requests — e.g. medical advice and anything pertaining to self-harm — in accordance with OpenAI’s policies 29% more often.

Image Credits: OpenAI

There’s clearly a lot to unpack with GPT-4. But OpenAI, for its part, is forging full steam ahead — evidently confident in the enhancements it’s made.

“We look forward to GPT-4 becoming a valuable tool in improving people’s lives by powering many applications,” OpenAI wrote. “There’s still a lot of work to do, and we look forward to improving this model through the collective efforts of the community building on top of, exploring, and contributing to the model.”

