Sen. Chris Coons (D-Del.) (right), chairman of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, speaks with Sen. Ben Sasse (R-Neb.) during a subcommittee hearing on April 27, 2021, on Capitol Hill in Washington, DC. The committee heard testimony on the effect social media companies’ algorithms and design choices have on users and discourse.

In a Senate Judiciary Committee hearing yesterday, there was a striking change of scenery—rather than grilling the floating heads of Big Tech CEOs, senators instead questioned policy leads from Twitter, Facebook, and YouTube on the role algorithms play in their respective platforms. The panel also heard from two independent experts in the field, and the results were less theatrical and perhaps more substantive.

Both Democrats and Republicans expressed concerns over how algorithms were shaping discourse on social platforms and how those same algorithms can drive users toward ever more extreme content. “Algorithms have great potential for good,” said Sen. Ben Sasse (R-Neb.). “They can also be misused, and we the American people need to be reflective and thoughtful about that.”

The Facebook, YouTube, and Twitter execs all emphasized how their companies’ algorithms can be helpful in achieving shared goals—they are working to find and remove extremist content, for example—though all the execs admitted de-radicalizing social media was a work in progress.

In addition to policy leads from the three social media platforms, the panel heard from a couple of experts. One of them was Joan Donovan, director of the Technology and Social Change Project at Harvard University. She pointed out that the main problem with social media is the way it’s built to reward human interaction. Bad actors on a platform can and often do use this to their advantage. “Misinformation at scale is a feature of social media, not a bug,” she said. “Social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant, and accurate information can reach them.”

One of Donovan’s proposed solutions sounded a lot like community cable TV stations. “We should begin by creating public interest obligations for social media timelines and newsfeeds, requiring companies to curate timely, local, relevant, and accurate information.” She also suggested that the platforms beef up their content moderation practices.

The other panel expert was Tristan Harris, president of the Center for Humane Technology and a former designer at Google. For years, Harris has been vocal about the perils of algorithmically driven media, and his opening remarks didn’t stray from that view. “We are now sitting through the results of 10 years of this psychologically deranging process that have warped our national communications and fragmented the Overton window and the shared reality we need as a nation to coordinate to deal with our real problems.”

One of Harris’s proposed solutions is to subject social media companies to the same regulations that university researchers face when they do psychologically manipulative experiments. “If you compare side-by-side the restrictions in an IRB study in a psychology lab at a university when you experiment on 14 people—you’ve got to file an IRB review. Facebook, Twitter, YouTube, TikTok are on a regular, daily basis tinkering with the brain implant of 3 billion people’s daily thoughts with no oversight.”

Apples and oranges

“What we need to do is compare what are the regulations and protections we apply in one domain and we’re not applying in a different domain,” Harris said.

The hearing quickly shifted to Section 230 of the Communications Decency Act, the reform of which has been mooted by members of both parties. One bill introduced last year would require platforms with more than 10 million users to obtain permission from people before serving them algorithmically tailored content. If the companies fail to do so, they would not receive protection under Section 230. Another bill would similarly revoke Section 230 immunity if algorithms spread misinformation that leads to violence.

The panel concluded by exploring alternative business models for social media platforms, ones that wouldn’t be as reliant on algorithms to drive engagement and ad views. Harris suggested that electric utilities might be one such model. Rather than encouraging people to leave their lights on all the time—which would make them more money but work against the societywide goal of energy conservation—regulations have been set up to discourage flagrant overuse, and a portion of the profits are put into funds to ensure the sustainability of the electric grid.

“Imagine the technology companies, which today profit from an infinite amount of engagement, only made money from a small portion of that,” he said. “And the rest basically was taxed to put into a regenerative public interest fund that funded things like the fourth estate, fact checkers, researchers, public interest technologists.”

It took Facebook two months to realize “Stop the Steal” might turn violent

It took Facebook less than two days to shut down the original “Stop the Steal” group but two months for it to realize that the group and its offspring had spawned a “harmful movement” that thrived on the platform and would ultimately lead to violence.

The news comes from a Facebook internal report analyzing the company’s response to the events leading up to and culminating in the January 6 insurrection at the US Capitol. Reporters at BuzzFeed News obtained the report, titled “Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement,” and published the document today after Facebook reportedly began restricting employees’ access to it.

The social media company was apparently unprepared for the idea that people would use their own accounts to spread misinformation and calls for violence and other antidemocratic behavior. Among the conclusions, Facebook acknowledged that, while it had prepared tools to combat “inauthentic behavior,” which might include provocations from a fake account run by Russian intelligence operatives, for example, the company was woefully unprepared to confront “coordinated authentic harm.” (Emphasis Facebook’s.)

Groups affiliated with “Stop the Steal,” when compared with other civic groups, were 48 times more likely to have at least five pieces of content classified as “violence and incitement” and 12 times more likely to have at least five pieces of hateful content.

The original “Stop the Steal” group was created on election night, November 3, by Kylie Jane Kremer, a pro-Trump activist and the daughter of Amy Kremer, a political operative and Tea Party organizer. The group spread disinformation about the US election results, claiming falsely that there was enough voter fraud to change the outcome. The group grew quickly, with 320,000 members and a reported million more on the waitlist by the time it was shut down on November 5.

But despite taking the group down for “high levels of hate and violence and incitement,” Facebook did not appear to think the group’s motivation was terribly harmful. “With our early signals, it was unclear that coordination was taking place, or that there was enough harm to constitute designating the term”—presumably an action that would have designated similar groups as harmful or hateful.

Because there was no designation, splinter groups quickly popped up and thrived for two months. Even a couple of days after the insurrection, 66 groups were still active, the largest of which was private but bragged that it had 14,000 members.


The rapid growth of those groups was due to what Facebook calls “super-inviter” accounts, which sent more than 500 invites each, according to the report. Facebook identified 137 such accounts and said they were responsible for attracting two-thirds of the groups’ members.

Many of those “super-inviter” accounts appeared to be coordinating across the different groups, including communication that happened both on and off Facebook’s various platforms. One particular user employed disappearing stories, which are no longer available on the platform after 24 hours, and chose his words carefully so as to avoid detection, presumably by automated moderation.

The Facebook report suggests that future moderation should look more closely at groups’ ties to militias and hate organizations. “One of the most effective and compelling things we did was to look for overlaps in the observed networks with militias and hate orgs. This worked because we were in a context where we had these networks well mapped.”

While Facebook may have mapped the networks, it’s had a spotty record in taking action against them. In fact, as recently as last month, the site was found autogenerating pages for white supremacist and militia movements if a user updated their profile to include those groups as their employer.

The report makes clear that this was a learning experience for the company. One of the main conclusions is that the investigators “learned a lot” and that a task force has developed a set of tools to identify coordinated authentic harm. It also notes that there is a team “working on a set of cases in Ethiopia and Myanmar to test the framework in action.”

“We’re building tools and protocols and having policy discussions to help us do this better next time,” the report says.

Facebook CEO Mark Zuckerberg. (Credit: Chip Somodevilla/Getty Images)

The Federal Trade Commission on Wednesday urged a federal judge in DC to reject Facebook’s request to dismiss the FTC’s high-stakes antitrust lawsuit. In a 56-page legal brief, the FTC reiterated its arguments that Facebook’s profits have come from years of anticompetitive conduct.

“Facebook is one of the largest and most profitable companies in the history of the world,” the FTC wrote. “Facebook reaps massive profits from its [social networking] monopoly, not by offering a superior or more innovative product, but because it has, for nearly a decade, taken anticompetitive actions to neutralize, hinder, or deter would-be competitors.”

The FTC’s case against Facebook focuses on two blockbuster acquisitions that Facebook made early in the last decade. In 2012, Facebook paid $1 billion for the fast-growing startup Instagram. While Instagram the company was still tiny—it had only about a dozen employees at the time of the acquisition—it had millions of users and was growing rapidly. Mark Zuckerberg realized it could grow into a serious rival for Facebook, and the FTC alleges Zuckerberg bought the company to prevent that from happening.

The story is the same for WhatsApp, the FTC says. “Facebook’s own messaging app, Facebook Messenger, was launched in 2011, but was already too far behind WhatsApp to prevent WhatsApp from gaining scale,” the FTC writes. “In 2014, Facebook acquired WhatsApp for $19 billion. The acquisition neutralized WhatsApp as a nascent threat and thereby deprived, and continues to deprive, users of the benefits of competition from an independent WhatsApp.”

Finally, the FTC argues that Facebook attached anticompetitive conditions to companies that joined Facebook Platform, a set of APIs that allowed third-party apps to obtain data about Facebook users.

“Between 2011 and 2018, Facebook made Facebook Platform available to developers only on the condition that their apps neither competed with Facebook nor promoted its competitors,” the FTC writes. “Facebook punished apps that violated these conditions by terminating their access to the Find Friends API and other APIs.”

The motion to dismiss is the first major step in the litigation process. It allows defendants to quickly dispose of lawsuits that are frivolous or based on invalid legal theories. At this stage in the litigation, the court is supposed to assume that the plaintiff’s allegations are true and dismiss the lawsuit if the plaintiff would lose the case anyway.

But the FTC argues that most of Facebook’s motion to dismiss quibbles with facts in the FTC’s complaint—such as the FTC’s claim that Facebook has market dominance—rather than arguing that the FTC’s case is legally groundless. Facebook will have a chance to dispute the FTC’s factual claims, of course. But it will have to wait until later phases of the litigation process to do that, the FTC said.

The FTC filed its lawsuit during the Trump administration, but we shouldn’t expect the agency to be any more sympathetic to Facebook under President Joe Biden. Biden recently nominated Lina Khan, an antitrust crusader whose scholarship has focused on tech giants like Amazon, to a seat on the five-member FTC. If she is confirmed, we can expect her to be an advocate for vigorous pursuit of the FTC’s Google and Facebook cases—and perhaps launch new cases against other tech giants as well.

The phone numbers and personal data of more than 500 million Facebook users have been posted for free on an online forum by a low-level hacker.

Alon Gal, CTO of cybercrime intelligence firm Hudson Rock, first discovered the leak on Saturday.

“All 533,000,000 Facebook records were just leaked for free,” he wrote in a tweet. “This means that if you have a Facebook account, it is extremely likely the phone number used for that account was leaked.”


Facebook acknowledged the news in an emailed statement Saturday afternoon, but said the data was obtained during a breach in 2019.

“This is old data that was previously reported on in 2019,” a Facebook spokesperson said. “We found and fixed this issue in August 2019.”


However, according to Gal, the vulnerability allowed hackers to see the phone numbers and other personal information of Facebook users.

“It was severely under-reported and today the database became much more worrisome,” he wrote.

By Gal’s count, 3,494,385 users in Canada were affected.


Global News has reached out to Facebook to confirm how many accounts have been affected in Canada and to determine how the company is handling the leak, but did not immediately hear back.

However, Gal said users’ phone numbers, full names, locations, birthdates, email addresses, and relationship statuses are among the details leaked.


“Bad actors will certainly use the information for social engineering, scamming, hacking and marketing,” he wrote.

© 2021 Global News, a division of Corus Entertainment Inc.