RampancyLambentRaven

http://www.atlanticcouncil.org/images/publications/Sovereign_Challenge_Report_091718_web.pdf
https://web.archive.org/web/20181015034247/http://www.atlanticcouncil.org/images/publications/Sovereign_Challenge_Report_091718_web.pdf

Read it and download it yourself. Before they memory hole this monstrosity. The deep state is deeply concerned about uncontrolled media and is working on censorship. Their biggest concern is not “disinformation”, but rather “post-truth”. Disinformation would be an incorrect fact, but post-truth would be a rejection of a subjective truth. The real issue is not what you read, but what you think.

They view fact checking and online banning as bandaids. Fact checking will refute unauthorized talking points without explicitly repeating them (strawmanning) to ensure that you don’t hear it from them too. Banning will be sold as purging fake accounts. All of that is reactive though. Their main goal is a proactive propaganda service to get out ahead of “fake news”.

The ascendancy of disinformation and its political ramifications could also arise as a result of a paradigm shift we are experiencing but have simply not yet adapted to. In his book, The Signal and the Noise, Nate Silver argues that the disruption we are currently experiencing is no different from the “information overload” experienced after the creation of the printing press in 1440. Silver argues that the sudden accessibility of large quantities of information can overwhelm society. He argues that too much useless and poor-quality information results in increased isolation and polarization, driving people to down select—that is, actively reducing the number of choices they consider—to limited sources of information and simplified narratives that fit our preconceived biases.

As professional curators and gatekeepers dwindle and are replaced by self-appointed and unaccountable ones, individuals are forced to assess information in ways for which they are not trained.

RampancyLambentRaven

2012-2016
https://archive.is/ZfMWy - How Obama’s Team Used Big Data to Rally Voters
https://archive.is/grlwo - Obama, Facebook and the power of friendship: the 2012 data election
https://archive.is/mpPkw - Data You Can Believe In. The Obama Campaign’s Digital Masterminds Cash In
https://archive.is/rwLMI - Facebook reveals governments asked for data on 38,000 users in 2013
https://archive.is/L5ZmZ - The Android Administration: Google’s Remarkably Close Relationship With the Obama White House, in Two Charts
https://archive.is/qiQp9 - How the Obama campaign won the race for voter data
https://archive.is/korgP - Obama, the ‘big data’ president
https://archive.is/BOXog - Michael Dickerson: The invisible man behind Obamacare's tech surge
https://archive.is/mUhLn - Google's Eric Schmidt Invests in Obama's Big Data Brains
https://archive.is/7ec9j - The Obama Big Data Team’s New Frontier
https://archive.is/qx1BW - Inside the Secret World of the Data Crunchers Who Helped Obama Win
https://archive.is/fFR1x - Google employees have enjoyed revolving door during Obama administration
https://archive.is/mJEv5 - 2012: The First Big Data Election
https://archive.is/COMyt - The White House: Big Data is a Big Deal
https://archive.is/7Fp1i - The White House: Unleashing the Power of Big Data
https://archive.is/CqhxJ - Execs From Apple, Google And AT&T Secretly Met With Obama To Discuss Surveillance
https://archive.is/D5m9b - Eric Schmidt: Obama's Chief Corporate Ally

2018
https://archive.is/sdC6B - Ex-Obama Campaign Director Drops Bombshell Claim on Facebook: 'They Were on Our Side'
https://archive.is/Q0axY - Ex-Obama Campaign Director: It's 'Unfair' Facebook Let Us 'Ingest Entire Social Network of US'

Soros
https://archive.is/UJqBQ - Soros: Google and Facebook are a menace to society

OY VEY~!

RampancyLambentRaven

Google, reddit, YouTube, facebook, twitter already shadowban users, comments, videos, what ever. They're removing and deleting comments. They're altering and editing comments and it is causing US workers to lose their jobs for no reason. The republicans and democrats are using middle men like the Atlantic council, ADL, SPLC etc and big tech as middle men to censor anyone they don't like. The government could fix this in less then 48 hours but is bought of by big Jewish Saudi $$$$$$ tech. We need a massive massive organized boycott against these big social media websites. Cancel your accounts and unplug. We the people must stand up against big tech it's time. Alphabet = Goolag alphabet also = FBI, CIA, NSA etc. Hmmmm. Makes one think. I would compare big tech and the spy federal agencies to Umbrella from Resident Evil we need anti trust + RICO charges against big tech. Expose the contributions, donations, and lobbying big tech has done to court the government and all the money the government has given big tech and how they clearly sleep with each other to fuck we the people over.

Adminstrater

I really hope the free speech laws in this country destroy the self appointed censorship lords.

VognerDuke76

Abandon Youtube. Go bitchute

whatisbestinlife

Facebook started a group in my name and invited my friends to join. They wanted me to do an art project, but it was full of typos and almost incomprehensible. It didn't sound like me at all, or even drunk me.

Ceegen

Just in time for the midterms... lol.

PeacefulAssassin

The Killstream was experiencing this the other day. The chat seemed to be censoring certain words from certain regions.

GoyimNose

I'm not buying it, Google would alter the comment before it showed up, why on earth would it change it front of the user? This is definitely JavaScript or something along those lines, OP confirmed faggot

cdglow

Google is definitely rigged in certain ways, but this looks like a bug, which you'd realize if you knew how modern web apps work. Frontend updates are made optimistically in real time (for speed), while the app simultaneously hits an API in the background and relies on that response as the source of the current application state, which takes anywhere from a quarter second to a few seconds on average.

What it looks like is the user submitted a comment and the frontend added it to the comment list. In the background, it hit the API to submit the comment, and the API returned a new list of comments for that view. That's where the error occurred: they probably mixed up the comment IDs somewhere in the code. When the new application state was returned, the view layer rendered it out. That's what you're seeing; they're not rewriting your comments in a quarter of a second.
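That flow can be sketched in a few lines of JavaScript. This is a hypothetical illustration of the pattern, not YouTube's actual code; `submitComment`, `onServerResponse`, and the IDs are all made up.

```javascript
// Local view state: the list of comments currently rendered.
let renderedComments = [];

// Step 1: optimistic update — the comment shows up instantly,
// before the server has saved anything.
function submitComment(text) {
  renderedComments = [{ id: "pending", text }, ...renderedComments];
}

// Step 2: the server's canonical comment list arrives and the
// view re-renders from it, discarding the optimistic version.
function onServerResponse(serverComments) {
  renderedComments = serverComments;
}

// Happy path: the server echoes the comment back correctly.
submitComment("I like puppies");
onServerResponse([{ id: 42, text: "I like puppies" }]);
// renderedComments[0].text === "I like puppies"

// Buggy path: a mangled ID lookup returns the wrong comment, so
// the visible text "changes" a fraction of a second after submit.
// The view layer faithfully renders bad state; nothing is rewritten.
submitComment("I like puppies");
onServerResponse([
  { id: 41, text: "someone else's comment" },
  { id: 42, text: "I like puppies" },
]);
// renderedComments[0].text === "someone else's comment"
```

The timing in the video matches the second path: the swap happens roughly one round-trip after submit, which is when the server's response would replace the optimistic state.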

People made posts here a few months back about "twitter unchecked all of my favorites right in front of my eyes" and it was the same issue.

Trust me, if Google wanted to alter comments, they could do so in a way that wouldn't be detectable to you.

6cd6beb

This is stupid.

That's where the error occurred: they probably mixed up the comment IDs somewhere in the code

Ok so you draft a comment and click submit, and instead of taking the text of your comment (which your local machine already has) and displaying it in the new location (posted comments instead of the box you draft in), they... post the comment, poll the API to get the ID of your comment, then poll the API to get the text of that comment, then display that where your comment should be?

How does that even remotely mesh with you saying

frontend updates are made optimistically in real time

Wouldn't an "optimistic real time update" just be taking the text you entered, formatting it to look like all the other posted comments and then displaying it on top of the other posted comments?

How in the world does "frontend updates are made optimistically in real time" mean your local machine pulls a "comment ID" from the API, then gets the text of that comment?

Learn more about the things you babble about before you babble about them, lowberg.

cdglow

Every app is different, especially ones that operate at a larger scale like YouTube.

But in a typical modern app, say you make a comment like "I like puppies". The frontend would make that show up in the comment list basically instantly (as fast as the DOM could be altered, probably faster than .1 seconds) so it looks like it's working super fast. At this point, it is NOT yet saved to Google's database.

It's done this way assuming the request succeeds (which it probably does around 99.9999% of the time) so the App appears lightning quick. This is done for good user experience: making a network request is slower than instantaneous, and companies don't want people waiting around for actions to submit. Even half-second delays here and there waiting for HTTP responses add up and annoy people.

In the background at the same time, the JS would make a network request to Google telling them you said "I like puppies" and the API would respond, maybe a quarter second or maybe several seconds later. Because most requests succeed, this setup works great for good user experience most of the time.

Different APIs return different types of data when data is submitted. But typically the backend is the ultimate source of truth. In many APIs, when you make a POST to add new content, they return the new current state of the data for the view layer to re-render.

Many companies use a view layer like React to take the current app state and render it. That's likely close to what you're seeing a half second after the submit: the API responded with some data about the current application state, and the view layer rendered what it believes to be the current state of the data. We don't have enough info to know whether the bug was on the backend or frontend, but my guess is that they messed up a comment ID somewhere, possibly with an off-by-one error.

Most Apps work something like this and around 99.9999% of the time, nothing goes wrong with a network request so you don't notice any issues.

There are some things in life I know little about and am open to learning more: but I guarantee you I know what I'm talking about in this domain, so fuck off niggerfaggot.

Abillionelectrons

You are correct in the way you explain how the web app works with the API, but you fail to explain how it would be possible to "smartly" modify the sentences people are posting. If it were a bug, the output would most likely be garbled text.

cdglow

It is illogical for the content to come out as garbled text in the event of a bug. The more likely bug would have the wrong content being rendered by the view library.

Modern view libraries (React is probably the most well-known, but there are dozens of them) in and of themselves just render out the current state of the content they're given.

The error could be on the backend (wrong data being returned) or frontend (explained below)

Since GraphQL and similar efforts are still not widely used technologies, this state is often built up and cached from the return values of multiple API endpoints. You might have an API endpoint for a video, one for user information, one for comments, and one for replies. Ultimately, these are built up together, processed, and cached for the frontend, and the view library renders the content the instant it has changed. (This is why you saw the content change at about the time you would expect the API return value to come in.)

The data from these APIs often comes in multiple nested formats, sometimes with references to other data. This can be very complicated and is not always a simple problem. See here for an example.

https://tonyhb.gitbooks.io/redux-without-profanity/content/normalizer.html

It is very easy to see how there might have been an off-by-one error, or something else where an API value was processed wrong, the IDs were mangled, and the wrong text was rendered in the box. Or maybe some value wasn't being cleared in a loop; that would cause the same issue too. A bug like that would not garble the content: at worst you'd see something like an empty box if the content couldn't be found.
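As a hypothetical illustration of that kind of bug (the payload, IDs, and variable names below are invented, not from any real API): responses are commonly normalized into an id-to-entity map, as in the linked article, and the view then looks comments up by ID. A single slipped ID renders a perfectly well-formed but wrong comment, never garbled text.

```javascript
// Invented API payload: a nested response normalized by ID.
const apiPayload = {
  comments: [
    { id: 100, text: "first comment" },
    { id: 101, text: "what the user actually typed" },
  ],
};

// Normalize: build a lookup table keyed by comment ID.
const byId = {};
for (const c of apiPayload.comments) {
  byId[c.id] = c;
}

// Correct render: look up the comment the user just posted.
const newCommentId = 101;
const correct = byId[newCommentId].text; // "what the user actually typed"

// Off-by-one bug: a slipped ID means the view layer faithfully
// renders a different comment's text — intact, just wrong.
const buggy = byId[newCommentId - 1].text; // "first comment"
```

Note that both lookups succeed and both return clean text; that's why a view-layer ID bug produces a coherent "replacement" comment rather than garbage.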

I'm not trying to carry water for Google or YouTube. I'd love to clearly catch them in an act of outright censorship and fucking bury them. But I know this field, and it seems almost certain that this is what the user noticed based on my knowledge.

Please note that I'm not saying this definitively. I don't know what happened in this specific case. I just gave a description of how modern web apps typically work.

rootbeervloat

Yeah, I'm pretty sure that's not what's going on. I don't claim to know what is going on, but I'm certain that YouTube isn't accidentally grabbing the wrong ID for his comment multiple times, especially looking at the weird generic text and its relation to what he was trying to type. It's also a bit suspect that we're seeing this on a big firearms channel... But as with all of this, that's just speculation. I, for one, have seen enough from Google to not trust their platform in any way/shape/form.

6cd6beb

Your first two paragraphs describe what I said happens, not what you said happens.

when you make a POST to add new content, they return the new current state of the data for the view layer to re-render.

Why? It already updated client side when the user clicked "submit". What's backing the assumption that something changed between when you hit "submit" and when the server processed that request?

Furthermore, why would refreshing the document be bundled in with POST requests? That removes the ability to do things like make a bunch of POST requests and then poll once to update the page, instead of updating the page after each of fifty POST requests.

That's where the error occurred: they probably mixed up the comment IDs somewhere in the code

This is still extremely stupid. A comment has text, a username, and an avatar. All of those things are contained in a single data structure some people like to call an "object". Is the code pulling the text in a completely different way than it's getting the username and avatar? Is it not fetching a comment from an array and then just pulling member variables for the comment.text, comment.username, and comment.avatar ? Are they just running getComment(lol_idk_whichever).text ?

Your appeal to authority is busted. We can write this off as a big clumsy error by a programmer at one of the biggest tech companies in the world, not caught by anyone in the release process including their QA team, and yet YOU are some infallible authority on what's happening in code you can't even see? Because you can make guesses based on how you think "most apps work something like this"?

I guarantee you I know what I'm talking about in this domain

Okay pal that guarantee and a dollar will buy you fifty cents.

cdglow

Your first two paragraphs describe what I said happens, not what you said happens.

No, I gave a further description of what I've been saying all along. It's not my fault if you didn't understand it.

Why. It already updated client side.

The backend, not the many frontends that consume the backend's content, is the ultimate arbiter of application state. Because the state of some type of data changed and a request is happening anyway, some types of applications treat that as a good time to refresh the state.

There are many ways to handle the timing of updating web content. You can poll periodically. You can piggyback off of some other requests (what I described and what you seem to view as incomprehensible). You can use web sockets and other techniques for real-time updating. You can just wait for a page refresh.

What's backing the assumption that something changed between when you hit "submit" and when the server processed that request?

2 simple points.

  1. In a popular application, some types of state can indeed change in a few seconds until the server responds.
  2. But we're not necessarily talking about that brief window; we're talking about the window between arriving at a page and fetching the list of comments, watching a video, writing a comment, and submitting. The comments might be loaded when you arrive at that URL, but might not be refreshed until you submit a new comment. Again, it depends on application design. Doing it this way might or might not make sense for some Apps, for some API requests.

APIs return different types of data depending on App design. Some APIs might want to keep things very simple and basically just return a 201 response code for some types of POST requests. Some APIs might want to make things more elaborate and also return the current state of some type of data. There's a bunch of valid ways to do it.

Furthermore, why would refreshing the document be bundled in with post requests?

Answered above.

That removes the ability do to things like make a bunch of post requests, and then poll once to update the page instead of updating the page after each of fifty post requests.

It doesn't remove the ability to make a bunch of POST requests. There's nothing stopping you from making as many as you want. Your argument might be closer to valid if you said something like "It might be inefficient and hit the backend too hard to fetch all of the data and send that back to the user on every POST request." However, that is not the case in many applications.

  1. Caching is a thing.
  2. Certain API actions, such as a user posting a comment, are not typically going to happen in a rapid-fire series. One way to design an application is to update the application state when a user makes a comment, but not do the same thing for more common POST requests like upvotes/downvotes. Again, there are a hundred ways to design an App and API based on the specific problem domain. Not all API requests in an App have to return state; some might just need to return a success message. This is a valid way of designing an API.
  3. Many APIs have rate limiting and other functionality to protect them on top of that.

This is still extremely stupid. A comment has text, a username, and an avatar. All of those things are contained in a single data structure some people like to call an "object". Is the code pulling the text in a completely different way than it's getting the username and avatar? Is it not fetching a comment from an array and then just pulling member variables for the comment.text, comment.username, and comment.avatar ? Are they just running getComment(lol_idk_whichever).text ?

You kind of make a point here, but every programmer has created some bugs. Maybe the bug was in the frontend, maybe in the backend. Maybe the view library they use fucked up somewhere. It's easy to say "this shouldn't happen" when bugs are common in any software.

Your appeal to authority busted. We can write this off as a big clumsy error by a programmer at one of the biggest tech companies in the world, not caught by anyone in the release process including their QA team, and yet YOU are some infallible authority on what's happening in code you can't even see? Because you can make guesses based on how you think "most apps work something like this"?

I didn't make any absolute claim about YouTube in this instance. I described how a typical modern Application works and what the likely issue was. I can see this based on my experience as well as observing the timing of the video. The UI updated optimistically, but the comment changed in about the time that it would take for it to hit the backend and receive the response. Just seeing that alone is a huge clue about what happened.

Okay pal that guarantee and a dollar will buy you fifty cents.

It's clear you have some type of technology knowledge, but I don't think you have quite as much experience in how modern web apps generally work.

Do you have a different logical explanation for what's observed in the video? I've made my case. What's your explanation?

6cd6beb

It's categorically stupid to update the client's browser

  • when posting the message, and then again
  • when done posting the message.

You can say the backend is the ultimate source of truth, and that's right, but if you write sloppy code that pulls from it twice per comment (and god knows how many times how many other places) it's going to cause a lot of load on that server(farm), and moreover someone's going to notice and make a name for themselves by fixing it.

In a popular application, some types of state can indeed change in a few seconds until the server responds.

Again, don't constantly poll whatever database holds the comments or you're going to destroy that server.

we're ... talking about ... the time window between arriving at a page ... and submitting(a comment).

In my experience, you don't get a full refresh of the comments when you write your own; you just see yours at the top of the pre-existing list. That's anecdotal, but we're looking at a black box, so anecdotal is all we've got.

might be closer to valid if you said something like "It might be inefficient and hit the backend too hard to fetch all of the data and send that back to user on every post request."

That is what I'm saying. If every post request from every user comes with a full dump of the page content, it becomes inefficient to make more than one. Inefficiency isn't just inherently wrong, it costs shekels. They're not spending shekels so you can get up-to-the-second comment feeds in situations where they probably haven't changed anyway.

Caching is a thing.

We're talking about updating the client with client-generated content that was created a negligible amount of time ago. There's no need to cache it, it's in the post request. Make a post request with the comment object and then shove the same comment object into the top of the comment feed.

typically

one way

might

might not

Do you get fresh comments when you post one? If so then they're pulling new content. If not it starts stacking evidence that they're not.

It's easy to say "this shouldn't happen" when bugs are common in any software.

The structures here are simple enough that "whoopsie doodle" doesn't really produce a reasonable explanation.

What's your explanation?

Either the guy's got a script to fuck with the content like so https://files.catbox.moe/gcrgew.png in real time and is using it to troll people, or they're fucking with his comment. First one's a simpler explanation. Why would they let the client update with the content they saved to the backend if their intent was to fuck with it.

cdglow

It's categorically stupid to update the client's browser when posting the message, and then again when done posting the message.

Optimistic updates in some fashion are considered current modern best practice and are logical when considering user experience.

Even a delay of a couple of seconds puts off users and gets them to use your App less, or even stop using it, which reduces the amount of money you make. The UI has to appear as close to instantaneous as possible, even though sending a network request and receiving and processing a response might take a few seconds, or even longer on a bad connection. You don't want the user to click "submit a comment" and then see a loading spinner for a while. You want the comment to be added instantly (even if it's not yet permanently saved) while the request does its thing invisibly in the background. Well over 99% of the time this works as intended, and you don't even notice that a large number of websites work along these lines. The UI is often updated instantly while requests happen in the background, and sometimes new data is fetched in the response.
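A minimal sketch of that pattern, under the assumption of a generic single-page app (`sendToApi` is a stand-in for a real network call, not any specific site's API): update the UI first, send the request in the background, and roll back if it fails.

```javascript
// Optimistic update with rollback. `state.comments` is the list
// the view renders; `sendToApi` resolves with the saved comment.
async function postCommentOptimistically(state, text, sendToApi) {
  const previous = state.comments;

  // 1. Instant UI update — no spinner while the request runs.
  state.comments = [{ id: "pending", text }, ...previous];

  try {
    // 2. Background request; the server assigns the real ID and
    //    its response becomes the canonical state.
    const saved = await sendToApi(text);
    state.comments = [saved, ...previous];
  } catch (err) {
    // 3. On failure, roll back so the UI matches the backend.
    state.comments = previous;
  }
  return state.comments;
}
```

Used with a fake API that succeeds, the pending entry is replaced by the server's version; with one that throws, the comment silently disappears again — both are normal behaviors of this pattern, not content rewriting.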

I already wrote a novel on what I think likely happened.

We're also quibbling over a few minor things like caching at this point. The fact that a comment was added a thousandth of a millisecond ago doesn't mean caching the new state of the content is necessarily illogical, especially at the scale that Google operates. Caching isn't done for the first user in line: it's for the dozens/hundreds/thousands of subsequent requests that come afterwards.

To me personally, it looks like you're technically knowledgeable but don't have a large amount of experience with modern web development, or we're talking in circles somewhat.

I will say this in conclusion.

  • I think what I explained is likely the issue, and the timing of the update is probably the biggest clue. It is perfectly explainable if you understand that modern view layers generally just render out the current state of the content they're given.
  • This doesn't mean that I can prove definitively what happened. The user could be fucking with people with some kind of manipulated content. I have no idea who that person is and how credible he is.
  • I think Google is a pile of shit, but I really doubt this is evidence of them rewriting content as many believe. If Google wanted to rewrite content, couldn't they do so in a way that wasn't as easily detectable by the user? Wouldn't the scores of pro-Hitler and pro-white and other thought-crimes be cleaned up rather than random nothing comments? I'm not saying it's impossible, but Google deliberately changing content doesn't seem anywhere near likely.

Anson

Do you blame people for thinking google would do this?

Chad88

I was actually going to guess that it was auto-translating when it shouldn't. A lot of apps automatically translate user content, but obviously not when the comment and the user have the same language. I think that's the part that screwed up.

u_r_wat_u_eat

NOTHING TO SEE HERE FOLKS

jcd1158

One paragraph to dismiss intense amounts of censorship because "Well yea they did A B and C but they wouldn't do D" Twitter had a similar "glitch" where they SHADOWBANNED conservatives

GlamourSpork

Thank you for explaining it. I had no idea. I don't code for a living, so in my head you type a comment, your comment should show up. I'm really hoping it's just a simple bug. So this dude could wait a few mins and refresh the comments on the video and it would be what he typed?

WORF_MOTORBOATS_TROI

Wouldn't we be able to easily check that by going back to look at the comments using a different device?

Atomized_Individual

Or Tor

cdglow

Of course: though it would be smartest to use a different unassociated device on a different IP address to be sure.

Another thing to do is to look in the Web Inspector and see what raw data the API returned; you could perhaps use that to figure out where in the chain the bug is (though I bet if there was a bug, Google would have fixed it fairly quickly).

TwitterBannedIt

^this.

That actually is the solution to everything (lul).

Network tab tells all, both directions.

Tallest_Skil

Why bother altering when they already shadowban?