The UK Will Have an Online Safety Bill

How Parents Can Help Keep Their Kids Safe on Social Media After the Leaked Facebook Papers

Better laws can help reestablish competition, restrain bad behavior and even strengthen democracy by realizing the potential of social media. In short, policymakers can ensure the question of who owns Twitter, Instagram, or TikTok doesn’t matter quite so much.

But over the last two years, little has been done in the US to improve the safety of young users on social media. The majority of the bills introduced by lawmakers have failed to get the support needed to reach a floor vote. Many of these bills, like the measure from Sens. Ed Markey (D-MA) and Josh Hawley (R-MO) to update a standing online child safety law, do much of what Biden asked for Tuesday, such as banning platforms like Instagram and YouTube from targeting ads at minors.

Michela Menting, a digital security director at market research firm ABI Research, agreed that social media platforms are “offering very little of substance to counter the ills their platforms incur.” Their solutions, she said, put the onus on guardians to activate various parental controls, such as those intended to filter, block, and restrict access, and more passive options, such as monitoring and surveillance tools that run in the background.

For now, guardians will need to learn how to use the parental controls while also being aware that teens can often circumvent those tools. Here’s a closer look at what parents can do to help keep their kids safe online.

After the fallout from the leaked documents, Meta-owned Instagram paused its much-criticized plan to release a version of Instagram for kids under age 13 and focused on making its main service safer for young users.

TikTok, the popular short-form video app, currently offers a Family Pairing hub, which allows parents and teens to customize their safety settings. Parents can link their own TikTok account to their teen’s and set parental controls, including how long the teen can spend on the app each day, restrictions on exposure to certain content, and whether the account is private. Parents can also find the Guardian’s Guide on TikTok.

On Instagram, one option encourages users to take a break from the app after a certain amount of time by suggesting they take a deep breath, write something down, check a to-do list, or listen to a song. Instagram also said it’s taking a “stricter approach” to the content it recommends to teens and will actively nudge them toward different topics, such as architecture and travel destinations, if they’ve been dwelling on any type of content for too long.

Snapchat already had a few safety measures for young users, such as barring them from having public profiles and requiring teens to be mutual friends before they can start communicating. Teen users have the Snap Map location-sharing tool off by default, but they can use it to share their real-time location with a friend or family member, even while the app is closed, as a safety measure. A Friend Check Up tool lets users make sure they are still in touch with the people they care about.

TikTok: Filtering Mature Content and Managing Screen Time for Teen Users

The company told CNN Business it will continue to build on its safety features and consider feedback from the community, policymakers, safety and mental health advocates, and other experts to improve the tools over time.

In July, TikTok announced new ways to filter out mature or “potentially problematic” videos. The new safeguards assign a maturity score to videos that might contain mature or complex themes. TikTok also rolled out a tool that aims to help people decide how much time they want to spend on the app. The tool lets users set regular screen time breaks and view a dashboard showing how many times they have opened the app, along with their daytime and nighttime usage.

In addition to parental controls, the app restricts younger users’ access to some features, such as Live and direct messaging. A pop-up also surfaces when teens under the age of 16 are ready to publish their first video, asking them to choose who can watch it. Push notifications are stopped after 10 p.m. for users under the age of 17.

Discord did not appear before the Senate last year, but the popular messaging platform has faced criticism over the difficulty of reporting problematic content and the ease with which strangers can get in touch with young users.

It is possible to connect with strangers on a public server or in a private group chat if a user is invited by another person in the room. By default, all users, including those ages 13 to 17, can receive friend invitations from anyone in the same server, which then opens up the ability to exchange private messages.

Marking Their Own Homework: The Carnegie UK Trust and Other Concerns About the Online Safety Bill

For the past ten years, the largest companies in the tech industry have been allowed to mark their own homework. They’ve protected their power through extensive lobbying while hiding behind the infamous tech industry adage, “Move fast and break things.”

The Carnegie UK Trust noted that there are no specific processes to define what significant harm is or how platforms would have to measure it. The bill would also drop the requirement that Ofcom encourage the development and use of technologies for regulating access to electronic material. Other groups have raised concerns about the removal of clauses around education and future-proofing, arguing that this makes the legislation reactive and ineffective, since it won’t be able to account for harms caused by platforms that haven’t yet gained prominence.

Legislation to tackle some of these harms will come into effect in the UK, but it won’t go far enough. Questions about the effectiveness of the Online Safety Bill have been raised many times by campaigners, think tanks, and experts. The bill doesn’t specifically name any minoritized groups, even though they are disproportionately affected by online abuse.

In 2020 and 2021, YouGov and BT (along with the charity I run, Glitch) found that 1.8 million people surveyed said they’d suffered threatening behavior online in the past year. Twenty-three percent of those surveyed were members of the LGBTQIA community, and 25 percent said they had experienced racist abuse online.

At the State of the Union, Biden Takes Aim at Silicon Valley’s Privacy Practices and Calls for Child Online Protections

The president attempted to rally bipartisan support to finally resolve a number of long-standing privacy, safety, and competition issues facing the tech industry. Over the course of the more than hourlong address, Biden called on Congress to pass new rules protecting user data privacy and boosting competition.

The address echoed much of what Biden said during his first State of the Union address last year. The Biden administration, along with Congress, has been troubled by the issue of child online safety for years, as evidenced by the leak of internal company documents detailing the mental health risks young users face on Meta’s platforms. In a testament to the administration’s desire for stricter online protections, that first address was attended by Facebook whistleblower Frances Haugen as a guest of First Lady Jill Biden.

Biden touted his administration’s work to bolster US competitiveness against China, leveraging the primetime spot to highlight the CHIPS and Science Act, which included $52 billion in funding to boost US semiconductor manufacturing. Biden did not say whether the administration would ban TikTok.

If you talk to US House Republicans, President Joe Biden delivered an offensive, hyperpartisan diatribe last night. Hell, if you just listened to the State of the Union address, you’d have heard the commander-in-chief heckled as a “liar,” blamed for the opioid epidemic (“It’s your fault!”), and met with a thunderous and sustained Republican “BOOOOOOOOOOO!”

Most of the rambunctious Republicans in the US House of Representatives set aside their rowdy ways when Biden spoke out against their common Silicon Valley foe.

Senators Cory Booker and Tina Smith on the Risks of Social Media for Kids

Data privacy—a bipartisan concern that’s historically devolved into partisan squabbling and inaction at the end of each congressional session—owned the night. But a popular line in a speech doesn’t mean the US will have a national privacy law anytime in the foreseeable future.

“You saw the people on both sides of the aisle stand up, so that’s a good sign,” says US senator Cory Booker, a New Jersey Democrat. “We have a lot of data about teens and preteens that shows negative effects on self-esteem, self-concept, and well-being. So I think he is right, as the leader of our nation, to express and sound alarms of concern.”

“Yeah. It’s a big deal,” says Senator Tina Smith, Democrat from Minnesota. “How I interpreted that is: We don’t really fully know, or understand, the impact of social media on kids. I talk to child psychologists and other experts about the risk of it and how dangerous it is.”
