Fewer than 200 people watched the original live video of the Christchurch massacre, Facebook has said.
None of them reported it to Facebook during the live broadcast, and it took nearly half an hour after the killer started his live video for anyone to report it using Facebook's reporting tools, the company said.
However, this has been challenged. Jared Holt, a reporter for Right Wing Watch, said he was alerted to the livestream and reported it during the attack.
[Photo caption: Police carry flowers left by well-wishers to the Al Noor Mosque in Christchurch. Fifty people died in the shootings on Friday.]
"I was sent a link to the 8chan post by someone who was scared shortly after it was posted. I followed the Facebook link shared in the post. It was mid-attack and it was horrifying. I reported it," Holt tweeted.
"Either Facebook is lying or their system wasn't functioning properly."
Holt then checked Facebook's tool for tracking the reports a user has submitted, and could find no record of his report.
"I definitely remember reporting this but there's no record of it in Facebook. It's very frustrating," Holt told Business Insider.
"I don't know that I believe Facebook would lie about this, especially given the fact law enforcement is likely asking them for info, but I'm so confused as to why the system appears not to have processed my flag."
Facebook declined to comment when contacted by Business Insider.
Facebook vice president Chris Sonderby said the social media giant is working around the clock to prevent the video from being shared again.
"The video was viewed fewer than 200 times during the live broadcast. No users reported the video during the live broadcast," Sonderby said in a statement.
"Including the views during the live broadcast, the video was viewed about 4000 times in total before being removed from Facebook.
"The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended."
The link to the live-stream was posted on anonymous message board 8chan, and shortly after the 17-minute video ended, a download link for it was also posted on the site.
Facebook removed the video and "hashed" it to automatically prevent it being uploaded again, but some users added watermarks or edited the video in order to slip it past the detection algorithms.
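The "hashing" described here is content fingerprinting: a signature of the video is stored so that re-uploads can be blocked automatically. The article's point is that small edits, such as watermarks, were enough to defeat it. A minimal sketch of why, assuming a naive blocklist keyed on exact cryptographic hashes (real systems use perceptual matching on audio and video features, not SHA-256, and the byte strings below are stand-ins for video data):

```python
import hashlib
from difflib import SequenceMatcher

def exact_fingerprint(data: bytes) -> str:
    """Exact hash: any change to the file, however small, yields a new value."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the original upload and a re-upload with a watermark added.
original = b"frame-data-of-the-original-video"
watermarked = b"frame-data-of-the-original-video+watermark"

# An exact-hash blocklist misses the edited copy entirely...
assert exact_fingerprint(original) != exact_fingerprint(watermarked)

# ...even though the underlying content is still almost identical,
# which is the kind of signal perceptual matching exploits instead.
similarity = SequenceMatcher(None, original, watermarked).ratio()
assert similarity > 0.8
```

This is why platforms needed to generate and share fingerprints for each edited variant separately, as the 800-plus versions mentioned below suggest.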
In the first 24 hours after the shooting, Facebook removed about 1.5 million versions of the attack video.
"More than 1.2 million of those videos were blocked at upload, and were therefore prevented from being seen on our services," Sonderby said.
"We have been working directly with the New Zealand Police to respond to the attack and support their investigation."
Prime Minister Jacinda Ardern has spoken to Facebook chief operating officer Sheryl Sandberg since the attack.
The Government's Cabinet meeting on Monday is expected to be mostly focused on gun law but it is understood the Government is also keen to call on social networks to do more to fight radicalisation in the wake of the mosque shootings. This could include a call to share more data directly with intelligence agencies.
The Global Internet Forum to Counter Terrorism - a consortium of global technology firms including Facebook, Google and Twitter - said it shared the digital "fingerprints" of more than 800 edited versions of the video.
Neal Mohan, YouTube's chief product officer, told The Washington Post that his platform also struggled to moderate the video successfully.
His team finally took unprecedented steps, including temporarily disabling several search functions and cutting off human review in order to speed the removal of videos flagged by automated systems. Many of the new clips were altered in ways that outsmarted the company's detection systems, he said.
Despite such efforts, concerns have been raised by a professor of engineering and information sciences about social media's failure to implement preventative measures.
Professor Katina Michael of the University of Wollongong said algorithms can only do so much to prevent certain content being uploaded, and human moderators are already forced to wade through screeds of questionable content.
"The best algorithms couldn't have stopped this. Having said that, if you [Facebook] can't stop it, don't offer it. If you want to provide the service, perhaps you have to vet the users."
Michael said the current algorithms were set up based on a corporate model that was centred around generating revenue, not looking for controversial content. "It is the failure of not only the algorithms, but human moderators."
Australian Prime Minister Scott Morrison has asked G20 members to consider practical ways to force companies like Facebook and Google to stop broadcasting atrocities and violent crimes.
Sonderby said Facebook is committed to working with leaders in New Zealand and other governments to help counter hate speech and the threat of terrorism.
Meanwhile, police probing the online presence of the terror suspect and his involvement in far-right chat boards and other internet activity have met with some resistance.
In one email exchange, New Zealand police requested an American-based website preserve the emails and IP addresses linked to a number of posts about the attack, but were met with an expletive-filled reply.
- Stuff with AAP and BusinessInsider.com.au
Katina Michael in Matthew Rosenberg, March 20, 2019, "Alarm raised about Facebook livestream mid-attack in Christchurch, man claims", stuff.co.nz, https://www.stuff.co.nz/national/christchurch-shooting/111412396/fewer-than-200-people-watched-shooters-christchurch-massacre-live-video-facebook-says
Disclaimer: The way I was quoted seems to imply that the content moderators at Facebook were partially to blame. This is not what I said in the interview with Matthew. Moderators are not paid to catch this kind of content; they are paid to investigate copyright and controversial content. Humans are at the mercy of the machine on this occasion. It can be likened to 100 people trying to stop leaks in 200,000 buckets. It just cannot happen. In terms of what could have stopped this footage from spreading? Ensuring more predictive AI algorithms, and also total information surveillance of everything coming through servers, and still that is not foolproof.