We thought we’d covered this before, and we had: it was one year ago yesterday that we did a post about Twitter experimenting with a new feature that would scan your tweet for language that could be harmful and put up a prompt asking if you really meant to send it in the heat of the moment.

A year later, that feature is apparently being rolled out, according to NBC News.

NBC News reports:

The tech company said Wednesday it was releasing a feature that automatically detects “mean” replies on its service and prompts people to review the replies before sending them.

“Want to review this before Tweeting?” the prompt asks in a sample provided by the San Francisco-based company.

Twitter users will have three options in response: tweet as is, edit or delete.

In the tests, Twitter found that, when prompted, 34 percent of people revised their initial reply or did not reply at all. After being prompted once, people composed on average 11 percent fewer offensive replies in the future, according to the company.

The tests helped to train Twitter’s algorithms to better detect when a seemingly mean tweet is just sarcasm or friendly banter, the company said.

Can Twitter’s algorithm tell if this is sarcasm? We think this is a great idea, and we’re glad Twitter is policing our speech before we “say” it.

Well, bless your heart.

So far, there’s nothing stopping you from posting your “mean” tweet; Twitter’s just stepping in and giving you a chance to think about it first.