Instagram recently announced it is working on bringing back the old chronological feed. In a series of Q&A stories on Friday, Instagram head Adam Mosseri answered people’s questions about the return of the chronological feed. The algorithmic feed that replaced it prioritizes content based on users’ activity to deliver a more personalized experience, but despite the company’s assertions about its advantages, it never won over a majority of users. Instagram now says it has been working on reviving the chronological feed and will release it early next year.

The company is experimenting with two versions of the chronological feed. The first gives you the option to choose favorites, whose posts appear right at the top of your feed. The second shows posts from everyone you follow in chronological order.
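To make the difference between the two tests concrete, here is a minimal sketch of the two orderings in Python. The Post model, its fields, and the function names are illustrative assumptions for this example, not Instagram’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List, Set

# Hypothetical post model; the fields are assumptions for illustration only.
@dataclass
class Post:
    author: str
    created_at: datetime

def favorites_first(posts: Iterable[Post], favorites: Set[str]) -> List[Post]:
    """Version 1: posts from your chosen favorites pinned at the top,
    followed by everything else, each group newest-first."""
    posts = list(posts)  # allow a generator to be iterated twice
    favs = sorted((p for p in posts if p.author in favorites),
                  key=lambda p: p.created_at, reverse=True)
    rest = sorted((p for p in posts if p.author not in favorites),
                  key=lambda p: p.created_at, reverse=True)
    return favs + rest

def chronological(posts: Iterable[Post]) -> List[Post]:
    """Version 2: one newest-first timeline of everyone you follow."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

Either way, ordering is driven by timestamps rather than predicted engagement, which is the core change from the algorithmic feed.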

In a report published last Tuesday, the Tech Transparency Project (TTP) described creating seven fake accounts for teen users aged 13, 14, 15, and 17. Instagram did not stop those accounts from searching for drug-related content. In one case, the platform auto-filled results when a user started typing “buy xanax” into the search bar; one suggested account was a Xanax dealer.

After following the Xanax dealer’s account, one fake minor account received a direct message “with a menu of products, prices, and shipping options,” the report found. Another fake minor account that followed an Instagram dealer was given suggestions to follow an account selling Adderall.

The report also found that Instagram’s guardrails against drug-related content aren’t working well. The platform bans many drug-related hashtags, such as #mdma, but when the fake minor accounts searched for that hashtag, Instagram suggested alternates like #mollymdma. The company said it will review hashtags to check for policy violations.
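That failure mode is easy to reproduce in miniature: an exact-match blocklist catches the banned tag itself, while a substring-based suggester happily surfaces variants the blocklist never sees. The sketch below illustrates that gap under those assumptions; it is not Instagram’s actual code.

```python
BANNED_TAGS = {"mdma"}  # illustrative blocklist, not Instagram's

def is_blocked(tag):
    # Exact-match blocking: "#mdma" itself is caught.
    return tag.lower().lstrip("#") in BANNED_TAGS

def suggest_alternates(query, all_tags):
    # A naive substring suggester surfaces variants such as "#mollymdma"
    # because they are not exact matches against the blocklist.
    q = query.lower().lstrip("#")
    return [t for t in all_tags if q in t.lower() and not is_blocked(t)]

print(is_blocked("#mdma"))                                    # True
print(suggest_alternates("#mdma", ["#mollymdma", "#music"]))  # ['#mollymdma']
```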

The report comes amid renewed scrutiny of how Instagram and Facebook affect the mental and physical health of their adolescent and teen users. A group of academic researchers published an open letter last Monday calling for Meta to be more transparent about its research on the mental health of its young users.

Meta (formerly Facebook) is testing live chat support for people, including creators, who have been locked out of their accounts. The test focuses on those who cannot access their accounts due to unusual activity or because of a suspension for violating community standards. “On the Facebook App specifically, we’ve started testing live chat help for some English-speaking users globally, including creators,” the company said in a statement late on Friday.

This is the first time Facebook has offered live help to people locked out of their accounts. The company has also begun a small test providing live chat support for English-speaking creators in the US who do not already have an assigned relationship manager from Meta to field their questions about Facebook or Instagram. “Creators can access a dedicated creator support site when logged in through Facebook. There, they can chat live with a support agent for help on issues ranging from the status of a payout to questions about a new feature like Reels.”

Then there’s Twitter, where the new CEO is trying to bring the platform up to speed. Although Twitter currently lets you add content warnings to tweets, the only option is account-wide: every photo or video you post gets a content warning, whether or not it contains sensitive material. The new feature being tested lets you add a warning to a single tweet and apply a specific category to it.

When you’re editing an image or video, tapping the flag icon at the bottom-right corner of the toolbar lets you add a content warning. The next screen lets you categorize the warning, with choices including “nudity,” “violence,” and “sensitive.” Once you post the tweet, the image or video appears blurred, overlaid with a content warning explaining why you flagged it. Users can click through the warning to view the content. Twitter is also testing a few more tools that will likely be released in 2022.
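The change is essentially a move from an account-level boolean to a per-media flag. The sketch below contrasts the two behaviors; the type names, fields, and category set are assumptions drawn from the categories named above, not Twitter’s API.

```python
from dataclasses import dataclass
from typing import Optional

# Categories named in the test; the set itself is an assumption.
WARNING_CATEGORIES = {"nudity", "violence", "sensitive"}

@dataclass
class MediaAttachment:
    url: str
    # Per-tweet flag being tested: None means this media carries no warning.
    content_warning: Optional[str] = None

def should_blur(media: MediaAttachment, account_wide_warning: bool) -> bool:
    """Current behavior: the account-wide setting blurs all media.
    Tested behavior: only individually flagged media is blurred."""
    return account_wide_warning or media.content_warning is not None

# With the account-wide setting off, only the flagged attachment is blurred.
flagged = MediaAttachment("https://example.com/a.jpg", content_warning="violence")
plain = MediaAttachment("https://example.com/b.jpg")
print(should_blur(flagged, account_wide_warning=False))  # True
print(should_blur(plain, account_wide_warning=False))    # False
```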