Tag: facebook

January 4, 2021

Do You Have To Hand Over Your Facebook Password To Your Employer?


With the rise of social networking sites, employers increasingly want to review publicly available Facebook profiles, Twitter accounts and other websites to learn more about job candidates and employees.

Many Facebook users don’t have their profiles set to private, which would make them available only to selected people or certain networks. There is a big difference between an employer viewing public postings and viewing private information.

Employees and potential employees should have the right to keep their Facebook, Twitter or other social media profiles containing confidential information private from the prying eyes of their bosses.

Employees and job applicants have an expectation of privacy when it comes to using social media such as Facebook and Twitter, including protection of their right to free speech.

There is no need for employers to demand access to applicants’ or employees’ private, password-protected information stored online. Yet there have been a number of reports of job applicants being asked during interviews to log into Facebook and other websites and allow potential employers to browse their profiles, acquaintances and other personal information, whilst others are simply being asked to list their passwords on job applications. Some job applicants have even been asked to friend human resources managers!

Traditionally, employers haven’t demanded that job applicants hand over the keys to their house or their bank account details, so why should they be able to gain access to their private information stored online?

It is like allowing your employer to open your mail to see if there is anything of interest inside. It would give employers the ability to act as an imposter and assume the identity of an employee, continually accessing, monitoring and possibly even manipulating an employee’s personal social activities, communications, associations and opinions.

Facebook’s Statement of Rights and Responsibilities provides that sharing or soliciting a Facebook password is a violation of its terms of service; however, employees can’t count on Facebook suing an employer for such a violation.

Legislation is being passed to address the current situation, modelled on the Computer Fraud and Abuse Act, which deals with computer hacking, to safeguard employees’ online identities. The Password Protection Act of 2012 would prohibit employers from accessing “protected computers” where social media files are kept.

Subject to some exceptions, it will prevent employers from forcing employees and prospective employees to provide access to their private online systems, including Facebook, e-mail and other online storage.

The Bill is broad in its drafting and isn’t limited to any specific website. It focuses on the servers where information is housed or stored, taking the emphasis off having to identify and define a particular type of internet service. This means an employer will not be able to force an employee to provide access to their Facebook or Twitter account as a condition of their employment.

It will forbid employers from forcing employees to provide access to information held on any computer not owned or controlled by the employer, and will protect that information even if it’s accessed on a computer owned by the employer. It will also prohibit an employer from discriminating or retaliating against a prospective or current employee who refuses to provide access to a password-protected account.

Therefore, if an employee is just looking at a social network on their work computer, an employer won’t be able to force the employee to disclose a password, as this would enable the employer to access another computer – the computer of the relevant social network the employee is browsing. The protection conferred by the Act extends to Gmail accounts, photo-sharing websites and employee-owned smartphones.

As the Act is drafted in a manner which is largely technology-neutral, its effect is not likely to be impacted by changes in technology. New online technologies continue to evolve and emerge, causing legislation to become outdated. However, because the Password Protection Act of 2012 isn’t limited to the protection of a particular service such as a social networking service, it is flexible enough to accommodate evolving uses of technology, as it focuses on access to a computer.

The Act covers any new service as long as it isn’t housed on an employer’s computer. There are, however, exceptions in the legislation: students are not protected from social media monitoring and can therefore be forced to hand over their social network passwords.

However, another Bill to be introduced, the Social Networking Online Protection Act, would plug this gap by conferring protection upon both employees and students.

However, if you are a Government employee or an employee who works with children under the age of 13, the Act enables States legislating in this area to provide an exemption, whilst another exception enables the executive branch to exempt entire classes of workers who come into contact with classified information, including soldiers. These exceptions to the protection conferred therefore sanction broad and sweeping fishing expeditions into employees’ private lives and communications.

There are already in existence a broad range of means for investigating employee misconduct. Further, internet activities constantly create many new types of records, and these can already be used against employees in investigations.

CONCLUSION

Whilst the new legislation is a major step forward in preventing employers from taking adverse action as a result of an employee’s refusal to provide access to their private accounts, employers still retain the right to permit social networking activities within their offices only on a voluntary basis, and to implement policies in relation to employer-operated computers. Once you are employed, you may be required to sign an acceptable use policy relating to the use of social media within the workplace. You may enjoy free speech online, but you may also be asked to sign a non-disparagement agreement which bans you from talking negatively about an employer on social media sites. Employees who violate such acceptable use policies will therefore remain accountable for any activities which breach them.


September 27, 2020

Facebook Provides Tips on Utilizing Video Playlists and Series Collections


Over the past couple of months, Facebook has been sharing a weekly series of insights into its various video tools via interviews with the Facebook staff who are working on them, which has provided some new and valuable perspective into why each function was added and what needs it can serve.

The latest video in the series, published this week, looks at video Playlists and Series, and how each option can help video creators maximize their exposure and viewership in different ways.

The video goes through the difference between the Playlist and Series options, and why the options were created:

“Facebook Watch is a video destination and as Facebook Watch continues to grow, we actually really wanted to focus on making publishing videos on Facebook way more flexible than it ever has been before, and with that, we wanted to create powerful new ways to publish, organize and drive discovery around your content. Playlists and Series is a way to do that, and a content format type to enable three things – organizing videos, driving discovery, and publishing content.”

Facebook also provides further definition as to the purpose of each option:

  • Playlist – A collection of videos that share a particular theme or topic, rather than a group of ‘episodes’ in one series.
  • Series – Videos in a Series are, as you might expect, all within the same program sequence, collecting one program into a set. Series also enables creators to put together trailers and add additional seasons, so viewers can watch content in chronological order.

Facebook says that the difference between the two options is that if your content counts as episodes in a set, then it’s a Series, but if you’re making tutorials and thematic content that doesn’t necessarily fit a traditional TV series format, yet can be grouped together, then a Playlist is the better option.

So why should creators use these options?

Well, aside from helping to guide viewers through your series and sets in sequential order, Facebook also notes that grouping your content can increase engagement by keeping people browsing through more of the content you’ve created.

In addition to this, Facebook notes that when you organize your content into themes and topics, you increase your chances of being found in related searches.

“When people are actually searching for your playlists and series, videos that are in those playlists and series will actually appear more highly in search results.”

Facebook also notes that Playlists and Series each have a unique URL, which can help you drive more viewership by directing your audience straight to your video collections.

Creators can build Playlists and Series in Creator Studio, with specific options in the Content Library tools.

Facebook video playlist creation in Creator Studio

Creators can also add their videos to an existing Playlist or Series at the upload stage within Facebook, which also applies to bulk uploads.

It’s not a revolutionary function that will suddenly bring you millions more views guaranteed, but with Facebook looking to emphasize Facebook Watch, and keep people coming back to its video platform, Playlists and Series can play a key role in maximizing engagement, and aligning with how Facebook looks to promote its unique video content.

If you have enough videos in a certain theme, it’s worth considering both options, and testing to see whether they help increase viewership. 

Free Speech Social Media Platform


September 27, 2020

Is Facebook Bad for Society? New Insights on the Company’s Approach Raise Important Questions


Is Facebook a positive or negative influence on society – and does the company, or indeed anybody in a position to enact any type of change, actually care either way?

This question has been posed by many research reports and academic analyses over the years, but seemingly, the broader populace, at least in Western nations, really hadn’t given it a lot of consideration until the 2016 US Presidential Election, when it was revealed, in the aftermath, that foreign operatives, political activists and other groups had been using Facebook ads, posts and groups to influence US voter activity.

Suddenly, many realized that they may well have been manipulated, and while The Social Network has now implemented many more safeguards and detection measures to combat ‘coordinated inauthentic behavior’ by such groups, the concern is that this may not be enough, and it could be too late to stop the dangerous impact that Facebook has had, and is having, on society overall.

Facebook’s original motto of ‘move fast and break things‘ could, indeed, break society as we know it. That may seem alarmist, but the evidence is becoming increasingly clear.

Moving Fast

The launch of Facebook’s News Feed in September 2006 was a landmark moment for social media, providing a new way for people to engage with social platforms, and eventually, for the platforms themselves to better facilitate user engagement by highlighting the posts of most interest to users.

At that time, Facebook was only just starting to gain momentum, with 12 million total users, though that was already more than double its total audience count from the previous year. Facebook was also slowly consuming the audience of previous social media leader MySpace, and by 2007, with 20 million users, Facebook was already working on the next stage, and how it could keep people more engaged and glued to its app.

It introduced the Like button in 2007, which gave users a simple, low-effort means to indicate their interest in a post or Page, and then in 2009, it rolled out the News Feed algorithm, which took into account various user behaviors and used them to define the order in which posts would appear in each individual’s feed – which, again, focused on making the platform more addictive, and more compelling.

And it worked: Facebook usage continued to rise, on-platform engagement skyrocketed, and by the end of 2009 Facebook had more than 350 million total users. It almost doubled that again by the end of 2010, and hit a billion total users in 2012. Clearly, the algorithm approach was working as intended – but again, in reference to Facebook’s creed at the time, while it was moving fast, it was almost certainly already breaking things in the process.

Though what, exactly, was being broken was not clear at that stage.

This week, in a statement to a House Commerce subcommittee hearing on how social media platforms contribute to the mainstreaming of extremist and radicalizing content, former Facebook director of monetization Tim Kendall has criticized the tactics that the company used within its growth process, and continues to employ today, which essentially put massive emphasis on maximizing user engagement, and largely ignore the potential consequences of that approach. 

As per Kendall (via Ars Technica):

“The social media services that I and others have built over the past 15 years have served to tear people apart with alarming speed and intensity. At the very least, we have eroded our collective understanding – at worst, I fear we are pushing ourselves to the brink of a civil war.”

Which seems alarmist, right? How could a couple of Likes on Facebook lead us to the brink of civil war? 

But that reality could actually be closer than many expect – for example, this week, US President Donald Trump has once again reiterated that he cannot guarantee a peaceful transfer of power in the event of him losing the November election. Trump says that because the polling process is flawed, he can’t say that he’ll respect the final decision – though various investigations have debunked Trump’s claims that mail-in ballots are riddled with fraud and will be used by his opponents to rig the final result.

Trump’s stance, in itself, is not overly surprising, but the concern now is that he could use his massive social media presence to mobilize his passionate supporter base in order to fight back against this perceived fraud if he disagrees with the result.

Indeed, this week, Trump’s son Don Jr. has been calling on Trump supporters to mobilize an ‘army’ to protect the election, which many see as a call to arms, and to potential violence, designed to intimidate voters.

Trump army post

Note where this has been posted – while President Trump has a massive social media following across all the major platforms, Facebook is where he has seen the most success in connecting with his supporters and igniting their passions, by focusing on key pain points and divisive topics in order to reinforce support for the Republican agenda among voter groups.

Why does that seemingly resonate more on Facebook than other platforms? 

Because Facebook prioritizes engagement over all else, and posts that generate a lot of comments and discussion get more traction, and thereby get more distribution via Facebook’s algorithm. Facebook also offers complex ad targeting options which have enabled the Trump campaign to hone in on specific pain points and concerns for each group.

Facebook Trump ad

By using custom audiences, the Trump campaign is able to press on the key issues of concern to each specific audience subset, more effectively than it can on other platforms, which then exacerbates specific fears and prompts support for the Trump agenda.

How you view that approach comes down to your individual perspective, but the net result is that Facebook essentially facilitates more division and angst by amplifying and reinforcing them through its News Feed distribution, because it focuses on engagement, and on keeping users on Facebook – and the way to do that, evidently, is by highlighting debate and sparking discussion, no matter how healthy or unhealthy the subsequent interaction may be.

It’s clearly proven to be an effective approach for Facebook over time, and now also the Trump campaign. But it could also, as noted by Kendall, lead to something far worse as a result.

Civil Unrest

But it’s not just in the US that this has happened, and the Trump campaign is not the first to utilize Facebook’s systems in this way. 

For example, in Myanmar back in 2014, a post circulated on Facebook which falsely accused a Mandalay business owner of raping a female employee. That post led to the gathering of a mob, which eventually led to civil unrest. The accusation was incorrect, but Facebook’s vast distribution in the region enabled it to spread quickly, beyond the control of authorities.

In regions like Myanmar, which are still catching up with the western world in technical capacity, Facebook has become a key connector, an essential tool for distributing information and keeping people up to date. But the capability for anyone to have their say, on anything, can lead to negative impacts – with news and information coming from unofficial, unverified sources, messages can be misconstrued, misunderstood, and untrue claims are able to gain massive traction, without proper checks and balances in place.   

We’ve seen similar in the growth of pseudoscience and conspiracy theories in western regions – the growth of the anti-vax movement, for example, is largely attributed to Facebook distribution.

Anti vax searches chart

As you can see in this chart, using Google Trends data, searches for ‘anti-vax’ have gained significant momentum over the last decade, and while some of that is attributable to the terms used (people may not have always referred to ‘anti-vax’), it is clear that this counter-science movement has gained significant traction in line with the rise of The Social Network.

Is that coincidence, or could it be that, by allowing everyone such huge potential reach with their comments, Facebook has effectively amplified the anti-vax movement, and others, because the controversy around them sparks conversation and prompts debate?

That, essentially, is what Facebook’s News Feed is built upon: maximizing the distribution of on-platform discussions that trigger engagement.

As further explained by Kendall:  

“We initially used engagement as sort of a proxy for user benefit, but we also started to realize that engagement could also mean [users] were sufficiently sucked in that they couldn’t work in their own best long-term interest to get off the platform. We started to see real-life consequences, but they weren’t given much weight. Engagement always won, it always trumped.”

Again, Facebook’s race to maximize engagement may indeed have led to things being broken, and various reports from insiders suggest Facebook didn’t consider those expanded consequences.

And why would it? Facebook was succeeding, making money, building a massive empire. And it also, seemingly, gives people what they want. Some would argue that this is the right approach – adults should be free to decide what they read, what they engage with, and if that happens to be news and information that runs counter to the ‘official narrative’, then so be it.

Which is fine, so long as there are no major consequences. Like, say, the need to be vaccinated to stop the spread of a global pandemic.

Real World Consequence

This is where things get even more complex, and Facebook’s influence requires further scrutiny.

As per The New York Times:

“A poll in May by The Associated Press-NORC Center for Public Affairs Research found that only about half of Americans said they would be willing to get a coronavirus vaccine. One in five said they would refuse and 31 percent were uncertain.”

US medical leader Dr Anthony Fauci has also highlighted the same concern, noting that “general anti-science, anti-authority, anti-vaccine feeling” is likely to thwart vaccination efforts in the nation.

Of course, the anti-vax movement can’t purely be linked back to Facebook, but again, the evidence suggests that the platform has played a key role in amplifying such in favor of engagement. That could see some regions take far longer than necessary to recover from the COVID-19 pandemic, so while the debate itself may seem relatively limited – and Facebook had allowed anti-vax content on its platform till last year, when it took steps to remove it – the actual consequences can be significant. And this is just one example.

The QAnon conspiracy theory had also been allowed to gain traction on The Social Network, before Facebook took steps to remove such last month, the violent ‘boogaloo’ movement saw mass engagement on the platform till Facebook announced new rules against such back in June, while climate change debates have been allowed to continue on the platform under the guise of opinion. In each case, Facebook had been warned for years of the potential for harm, but the company failed to act until there was significant pressure from outside groups, which forced its response.

Is that because Facebook didn’t consider these as significant threats, or because it prioritized engagement? It’s impossible to say, but clearly, by allowing such to continue, Facebook benefits from the related discussion and interaction on its platform.

The history shows that Facebook is far too reactive in these cases, responding after the damage is done with apologies and pledges to improve.

Again, as noted by Kendall:

“There’s no incentive to stop [toxic content] and there’s incredible incentive to keep going and get better. I just don’t believe that’s going to change unless there are financial, civil, or criminal penalties associated with the harm that they create. Without enforcement, they’re just going to continue to be embarrassed by the mistakes, and they’ll talk about empty platitudes… but I don’t believe anything systemic will change… the incentives to keep the status quo are just too lucrative at the moment.”     

This is where the true conflict of open distribution platforms arises. Yes, it can be beneficial to give everyone a chance to have their say, to share their voice with the world. But where do you draw the line on such?

Facebook CEO Mark Zuckerberg would clearly prefer for Facebook not to intervene:

“People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences. I understand the concerns about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands. It’s part of this amazing expansion of voice through law, culture and technology.”

And that may reveal the biggest true flaw in Facebook’s approach. The company leans too far towards optimism, so much so that it seemingly ignores the potential damage that such can also cause. Zuckerberg would prefer to believe that people are fundamentally good, and we, as a society, can come together, through combined voice, to talk it out and come to the best conclusion.

The available evidence suggests that’s not what happens. The loudest voices win, the most divisive get the most attention. And Facebook benefits by amplifying argument and disagreement.  

This is a key concern of the modern age, and while many still dismiss the suggestion that a simple social media app, where people Like each other’s holiday snaps and keep tabs on their ex-classmates, can have serious impacts on the future of society, the case, when laid out, is fairly plain to see.

Investigations into such are now taking on a more serious tone, and the 2020 Election will be a key inflection point. After that, we may well see a new shift in Facebook’s approach – but the question is, will that, once again, prove too late?



September 24, 2020

Facebook Detects Three New Russian-Based Networks Attempting to Interfere with Foreign Politics


This is a concern, as we head into the final stretch of the US Presidential election campaign.

This week, Facebook has reported its discoveries of three new Russian-based networks that have been attempting to use Facebook to interfere in foreign political debates.

The three networks are as follows:

  • 14 Facebook users, 35 Pages, 18 Groups and 34 Instagram accounts which Facebook has traced back to the Russian military
  • 1 Page, 5 Facebook accounts, 1 Group and 3 Instagram accounts linked to the Russian Internet Research Agency (the group behind the majority of Russian-based interference operations during the 2016 US Presidential campaign)
  • 23 Facebook accounts, 6 Pages, and 8 Instagram accounts that have been linked back to Russian intelligence services

Cumulatively, around 74k people followed these Pages on Facebook, 9.5k had joined their associated groups and around 15k followed the profiles on Instagram. So in terms of scale, the impact is not massive – but then again, impact can’t be measured in initial audience size in this respect, as it only takes a small group to plant a seed that can then become a much bigger point of debate and division among Facebook audiences.

In assessing their purpose, Facebook found that these operations were mostly focused on distributing content around local elections and geopolitical conspiracies, and included reports on COVID-19 misinformation, foreign trade sanctions, police brutality and more. And all of them gave at least some focus to the US election and Presidential candidates. 

Facebook post example

Of course, it’s no major surprise to see Russian-based operatives looking to influence the US election, as they did in 2016, but it’s concerning to see such activity ramping up just 40 days out from the poll. 

How you view the findings then comes down to perspective. On one hand, these new discoveries show that Facebook’s efforts to detect and remove these clusters are working, as they’re finding more of them as time goes on.

But are they detecting all of them? Are these just the ones that Facebook has caught, and other groups are still using The Social Network to influence voter opinions?

We can’t know the full extent of such operations, but the findings show that foreign groups are certainly not going to stop trying to use The Social Network as a tool to incite voter action, in order to influence the final result of the November poll.

Hopefully, Facebook’s improved initiatives are detecting the majority of these groups before they can have any impact.

The next month will be the biggest test of Facebook’s detection systems to date.  



September 23, 2020

Facebook Removes Cluster of China-Based Accounts Seeking to Interfere in Foreign Politics


With the US Presidential Election only 40 days away, Facebook has this week reported that it recently detected and removed a cluster of China-based Facebook accounts and Pages that had been seeking to interfere in foreign politics, including in the US.

The Chinese cluster incorporated 155 accounts, 11 Pages, 9 Groups and 6 Instagram accounts:

“133,000 accounts followed one or more of these Pages, around 61,000 people joined one or more of these Groups, and about 150 accounts followed one or more of these Instagram accounts.”

However, the US wasn’t the network’s primary focus – as per Facebook:

“In Southeast Asia where this network focused most of its activity, they posted in Chinese, Filipino and English about global news and current events including Beijing’s interests in the South China Sea; Hong Kong; content supportive of President Rodrigo Duterte and Sarah Duterte’s potential run in the 2022 Presidential election; criticism of Rappler, an independent news organization in the Philippines; issues relevant to the overseas Filipino workers; and praise and some criticism of China. In the US, where this network focused the least and gained almost no following, they posted content both in support of and against presidential candidates Pete Buttigieg, Joe Biden and Donald Trump.”

The main focus seems to have been influencing the conversation around naval activity in the South China Sea, including US Navy ships – though as Facebook notes, the group had also sought to distribute content related to the US election.

Facebook China post

The finding is a concern given the ongoing tension between the US and China, sparked in some ways by US President Donald Trump seeking to hold China accountable for the outbreak of COVID-19.

Since his election in 2016, Trump has sought to pressure China on US trade deals, but his criticisms over the pandemic have further eroded US-China relations.

In a recent speech to the UN General Assembly, Trump once again reiterated his criticisms:

“​The Chinese government and the World Health Organization – which is virtually controlled by China – falsely declared that there was no evidence of human-to-human transmission. Later, they falsely said people without symptoms would not spread the disease.”

Those tensions were also a part of the US Government’s push for the sell-off of TikTok, with Trump noting that the threat to ban the app was part of retaliatory efforts against China over the spread of COVID-19.

Given this, the fact that a Chinese-backed group has been detected seeking to use Facebook to influence US voters in any way is a major concern. Facebook says that the cluster was detected as part of its regular internal investigations into coordinated inauthentic behavior, which, hopefully, bodes well for future enforcement efforts.

But it’s another concern to keep an eye on heading into the peak of the campaign period.

In addition to this, Facebook has also reported detecting another, smaller network originating from the Philippines. 

“They posted in Filipino and English about local news and events including domestic politics, military activities against terrorism, pending anti-terrorism bill, criticism of communism, youth activists and opposition, the Communist Party of the Philippines and its military wing the New People’s Army, and the National Democratic Front of the Philippines.”

Facebook has significantly ramped up its detection and enforcement efforts in this respect in the aftermath of the 2016 US election, but now is the ultimate test of these new systems.

Hopefully, this new action is a positive sign that Facebook’s processes are detecting such groups. 



September 23, 2020

Facebook’s Removing its Restrictions on Text Content in Facebook Ad Images


This is a significant update for Facebook Advertisers. According to reports, Facebook is removing its restrictions on ads which include more than 20% text in the main image.

As shared by social media expert Matt Navarra, Facebook is contacting advertisers directly to inform them of the update, explaining that:

“…we will no longer penalize ads with higher amounts of image text in auctions and delivery.”

We’ve asked Facebook for further confirmation, but as per the above note, the platform is currently in the process of updating its documentation to reflect this update – on the Facebook Help page about text in ad images, for example, it now says:

Facebook text in ad images

A previous version of this overview outlined the specific ad limits:

“To create a better experience for viewers and advertisers, ads that appear on Facebook, Instagram, and the Audience Network are screened based on the amount of image text used in your ad. Based on this review, advertisements with a higher percentage of image text may not be shown. Please note that exceptions may apply to certain ad images. For example, exemptions apply to book covers, album covers and product images.”

The rule, as you can see, was pretty clear – ads with too much text in their images would not be approved.

Facebook even provided a Text Overlay tool to check whether your ad aligned with the 20% restriction.

Facebook text overlay tool

That tool is also no longer available, as Facebook looks to ease back its text in ad image restrictions.

Facebook’s long-held text restrictions in ad images have caused major headaches for many advertisers, requiring significant, specific reformatting of ad images in order to align with Facebook’s ad rules. At times, Facebook’s enforcement process in this respect has also been flawed, so it’ll be a relief to many to see those limits taken away.

Why did Facebook have the text limit at all?

Over the years, Facebook has repeatedly noted that Facebook and Instagram users dislike ads with too much text in the main image, so it has restricted such text, seemingly to improve the general user experience.

Facebook did, however, change its ad text rules back in 2018, enabling marketers to include more text in their ads, though ad reach would be restricted as a result, relative to how much the text exceeded the limit (this is reflected in the image above).

This new update, apparently, removes any reach restriction, meaning that ads with more than 20% text in the main image will be displayed as normal, and will reach the same number of people as any other Facebook ad. As per the official explanation, Facebook still maintains that ad images with less than 20% text perform better, and recommends that advertisers “keep your text short, clear and concise in order to get your message across effectively”.

But if you include more text in your ad image, your ad will run, and could theoretically reach just as many people as any other campaign, depending on your approach.

We’ll update this post with more information from Facebook as it’s provided.



September 23, 2020

Facebook Shares New Insights into Gen Z, and How COVID-19 has Changed Their Outlook [Infographic]


While the COVID-19 pandemic has had wide-reaching impacts, with older people statistically the most susceptible to the virus itself, younger people are also seeing significant shifts in their educational and career trajectories, which could have long-lasting effects.

As noted by Facebook:

“For Gen Zers, the COVID-19 pandemic has struck at a particularly formative time, disrupting educational journeys, career opportunities and more. Yet many Gen Zers seem to be weathering it in a way that leaves them transformed, stronger – energized around who they are, what they stand for and what drives them.”

Indeed, the societal changes happening as a result of COVID-19 seem to have reinforced many traits already evident in younger demographic cohorts, including increased awareness of causes and of their impacts on various aspects of society. And that could alter the way marketers look to connect with these audiences, in order to maximize brand messaging and better link into significant shifts.

To provide more insight on this, Facebook recently analyzed survey responses from over 15,000 people to get a better understanding of how Gen Z is responding to the pandemic, and what that means for advertising and outreach. You can read Facebook’s full “Meet the Future: Gen Z’s Regeneration” report here, but the key data points are summarized in the graphic below.

Some valuable insight for your planning.  

Facebook Gen Z research



September 16, 2020

3 Ways Facebook is Supporting Mental Health



