Apple has announced that it plans to roll out a system for checking photos for child abuse imagery on a country-by-country basis, depending on local laws.

The company had said earlier that it would implement a system that screens photos for such images before they are uploaded from iPhones in the United States to its iCloud storage.

Child safety groups lauded Apple’s efforts as it joined Facebook, Microsoft and Google in taking such measures.

However, Apple’s decision raised eyebrows because scanning on the device itself could be misused by governments. Most other technology companies check photos only after they are uploaded to their servers.

Apple further said it would expand the service to other countries based on the laws of each country where it operates.

The company said nuances in its system, such as “safety vouchers” passed from the iPhone to Apple’s servers that do not contain useful data, will protect Apple from government pressure to identify material other than child abuse images.

Apple added that a human review process acts as a backstop against government abuse. The company will not pass reports from its photo-checking system to law enforcement if the review finds no child abuse imagery.

Regulators have repeatedly demanded that big tech firms do more to combat child abuse and take down illegal content.

Some laws in Britain could be used to force tech companies to act against their users in secret.

Facebook’s WhatsApp, the world’s largest fully encrypted messaging service, is also under pressure from governments that want to see what people are saying, and it fears that pressure will now increase.

WhatsApp chief Will Cathcart sharply criticized Apple’s new architecture in a series of tweets.

“We’ve had personal computers for decades, and there has never been a mandate to scan the private content of all desktops, laptops or phones globally for unlawful content,” he wrote. “It’s not how technology built in free countries works.”