Sending "genuinely threatening" or "knowingly false" messages is among the new criminal offences being added to proposed online safety laws.
If passed, the government's online safety bill could see social networks fined up to 10% of their global turnover if they fail to remove harmful content.
The changes mean they will also have to proactively remove harmful content.
The bill also covers revenge porn, human trafficking, extremism and promoting suicide online.
The bill already stated that websites hosting user-generated content, such as Facebook and Twitter, would have to swiftly remove illegal content once it was reported to them.
Now they will also have to put in place proactive measures to stop illegal activity.
The issue has become a talking point recently, with the racist abuse of footballers, revenge porn, cyberflashing and Covid disinformation highlighted as key safety concerns for social media companies to address.
Culture Secretary Nadine Dorries urged online platforms to start making the changes now before the bill comes into force.
Speaking to BBC Breakfast, she said: "They can start doing what they need to do to remove those harmful algorithms and to remove much of the damage that they do, particularly to young people and to society as a whole."
The minister added the move would "hold the feet to the fire" of social media companies that have "damaged lives", making them accountable for the first time.
Judy Thomas, whose daughter Frankie took her own life aged 15 in September 2018, said her school tablet and computer had been used to access distressing material in the hours and months before her death.
She told BBC Radio 4's Today programme: "Back in January, February, March, she'd been accessing, at school, horrendous sites."
Ms Thomas called for mandatory age verification to protect children online and said all websites should be included in the scope of the bill, rather than just larger platforms.
"We need to ensure young people simply cannot access online harms, and that if companies do the wrong thing… that there is a real price to pay," she said.
Ms Thomas added these penalties must not just be financial, as many businesses could "absorb that easily", but the heads of these firms must also be "held to account".
The big technology companies say they welcome the "clarity" that the online safety bill brings and that they recognise the need for regulation.
In many cases they have not waited for governments to step in, investing in technology such as machine learning to identify harmful content at scale.
However, while many say the bill does not go far enough, social media businesses point to free speech issues.
They say they want to ensure the new rules do not stifle people's access to information by pushing companies to "over-moderate" - removing too much content in order to comply.
The vast scope of the online safety bill means it will always have its critics.
But experts underline that its introduction will be nothing short of a revolution in how the online world is policed, and will make it a very different place for the next generation.
Asked about age verification, Ms Dorries said the government was looking at the idea, but said there was a "downside" to requiring all children to verify their age to access the internet.
"And young people go on to the internet to go shopping, you know, on clothes. Do we need to ensure that they verify their age when they're doing that?" she added.
The government confirmed the offences added to the list of priority offences, which platforms must remove under the changes, include:
- Revenge porn
- Hate crimes
- The sale of illegal drugs or weapons
- The promotion or facilitation of suicide
- People smuggling
- Sexual exploitation
Previously, companies were only forced to take down these posts after they were reported – but will now be required to be proactive in preventing people from seeing them in the first place.
The government said naming these offences also enabled Ofcom – the proposed regulator – to take faster action.
The changes come after three separate parliamentary committee reports warned the bill required strengthening and more clarity for tech firms.
New criminal offences
In addition, three new criminal offences have been added to the bill.
The first is sending "genuinely threatening communications", covering threats to rape, kill or cause financial harm, as well as coercive and controlling behaviour and online stalking.
Sending "harmful communications", such as a domestic abuser sending an ex-partner a photograph of their front door to frighten them, is the second. However, offensive content with no intent to cause serious distress would not be illegal.
The final new offence is "knowingly false communications", which would cover messages deliberately sent to inflict harm, such as a hoax bomb threat.
The government said the bill would not prohibit "misinformation", such as a social media post promoting a fake coronavirus cure, as long as those spreading it were unaware what they were saying was false.
If you or someone you know is feeling emotionally distressed, information on where to go for support is available via the BBC Action Line.