The Retweet began as an abbreviation adopted by the Twitter community to indicate attribution when resharing a tweet. Twitter eventually listened to its users and coded the automatic Retweet into its interface. The automatic Retweet prevented modification of the tweet and simply transplanted the Tweeter’s profile photo and text directly onto the Retweeter’s profile, like a donation of attention from one account to another. Many users saw this as an assault on their nomenclature of choice – why could Twitter not have favoured their preferred method of resending information, which looked like this:

[Optional: Retweeter’s additional text] RT @username [Original tweet text and links]

TweetDeck, a piece of Twitter monitoring software, protected the original citation style by building it into its interface. When you clicked to Retweet a tweet using TweetDeck, it would bring you to a darkened version of the tweet that could either be automatically Retweeted or edited in the original Retweet format:

RT @username: [Original tweet text and links]

Twitter's rebranding of the TweetDeck logo to a black bird on a blue background, shortly after acquiring the company.

TweetDeck’s userbase rose to the point where it could justify opening a directory for its fans. Most likely recognizing that growth, Twitter purchased the company. There was some speculation that Twitter may have bought out TweetDeck to keep it out of UberMedia’s hands, but I think Twitter had equal motivation to satisfy the growing demand for more powerful monitoring software and didn’t want to keep losing so many users to a secondary service. They also needed an alternative to their excessively simplistic mobile interface, one that would provide more monitoring power on the go. Since then, they have completely rebranded the web and desktop versions of TweetDeck, replacing the raven with a Twitter bird. Interestingly, the Twitter bird for the main web interface is a simple blue with no outline, whereas the bird representing TweetDeck is the same shape but black on a blue background.

The interface of TweetDeck Mobile after you click "Manual RT", with a tweet written within.

Initially, as part of the rebrand, Twitter slashed the TweetDeck directory and removed the option to perform manual Retweets. In its place, they forced users to 'Quote Tweet', imprisoning the Retweeted text and username in quotation marks. Besides being visually jarring, quotation marks tend to throw off certain monitoring software and do not permit the range of communication styles many users prefer. The MT, or modified tweet, is used to represent an altered quotation, which is clunkier to accomplish using the 'Quote Tweet' function. Obviously, Twitter wants to discourage users from changing what the original author said. That is not necessarily a bad thing, but it comes into conflict with users' creative attempts to beat the tight character limit.
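
To make the mechanics of the two conventions concrete, here is a minimal sketch in Python (purely for illustration – the function name, the 140-character limit handling and the truncation rule are my own assumptions, not anything Twitter or TweetDeck actually implemented) of how a manual RT could be assembled and downgraded to an MT when it runs long:

    TWEET_LIMIT = 140  # assumed character limit, for illustration only

    def manual_retweet(original_user, original_text, comment=""):
        """Compose an old-style manual RT, falling back to a trimmed MT."""
        prefix = comment + " " if comment else ""
        rt = "{}RT @{}: {}".format(prefix, original_user, original_text)
        if len(rt) <= TWEET_LIMIT:
            return rt
        # Over the limit: flag the tweet as modified (MT) and trim the quoted
        # text, leaving room for a single-character ellipsis.
        header = "{}MT @{}: ".format(prefix, original_user)
        room = TWEET_LIMIT - len(header) - 1
        return header + original_text[:room] + "…"

    # Example: manual_retweet("username", "Original tweet text and links", "My two cents")
    # -> "My two cents RT @username: Original tweet text and links"

The point is simply that RT and MT are plain-text transformations under the user's control, whereas 'Quote Tweet' wraps the original in quotation marks on the user's behalf.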

The interface to either automatically or manually RT a tweet on TweetDeck, with an arrow and the text 'STOP IT' pointing towards the manual RT button. Reaction to the 'Quote Tweet' feature, via Matt Silverman on Mashable.

The last bastion of RT nostalgia lies in the TweetDeck mobile app. Despite an update since Twitter's acquisition of TweetDeck, the mobile app still allows you to edit an automatic Retweet and will insert the RT before the @username. Symbolically, Twitter has yet to force the TweetDeck mobile app to drop the yellow raven in favour of the blue bird, although I'd wager the next major update might change that.

More recently, Twitter updated the web and desktop versions of TweetDeck to include an 'Edit & RT' option, which mimics the old manual Retweet style, and did away with the option to 'Quote Tweet.'

Putting the Right Face Forward

Since there is too much variability in content to judge whether an edited RT with additional text is better, I'm going to focus purely on the question of whether it is better for a user to see a novel or a familiar profile photo in their timeline of tweets. This assumes that the Retweeter is more familiar to the viewer of the tweet than the Retweetee (the account being Retweeted), which obviously does not hold true for every case in a network. Still, I think it's a fairly representative thought experiment for most Twitter relationships.

Another confounding factor is the small grey text that appears above a tweet when it is automatically Retweeted, identifying it as a Retweet and listing the Retweeter's name. Since this could certainly be used as a visual hook, or conversely as a visual skipping cue, it could change some of the considerations I lay out in this post. However, I think it's faint enough and small enough that it's unlikely to be one of the chief targets of visual focus during a rapid saccade.

What grabs your attention in the Twitter Timeline?

When someone scans down their Twitter timeline, the decision to stop on a particular tweet and invest additional attention could be based on any number of factors. However, there are only so many visual anchors for them to use during the scanning process. These include:

  • The profile photo
  • The username and full name
  • The tweet content (with highlighted text for usernames, hashtags and links)
  • The tweet time
  • The 'expand' button
  • A media button at the bottom of the tweet

Determining whether the automatic or the manual RT is worth more relies on knowing which of these visual anchors the majority of users focus on during their visual scans, also known as saccades.

Watching you, Watching Twitter

Since no eye-tracking studies of tweets have been conducted to date, the best I could find were these YouTube videos from the Tobii Eye Tracking YouTube channel, which detail the ocular focus of an "expert" Twitter user navigating the website. Watch the entire video if you have time, and note where the focus lies during the faster downward scrolling movements. This is an older version of Twitter, but the essential aspects of the timeline structure remain unchanged.

When the expert user performs faster scans (beginning at 45 seconds), the focus moves towards the usernames but usually stops short of the profile photos. Slower scanning is devoted to the body of the tweets and seems to make up a fair proportion of the overall focus during the video.

They also have a video of a beginner Twitter user, but it is relatively similar to the expert user's, with just slightly more focus on the profile photos during both slow and fast saccades.

Does this mean that profile photos are irrelevant to the scanning process? In my opinion, that's unlikely, especially when you consider just how shaky the focal point on the screen was during those faster saccades. It's also important to note that because current eye- and gaze-tracking technology focuses largely on the center of the pupil, we are missing significant amounts of data on the other elements within a user's field of view. In their 2008 review paper about the future of eye tracking in research on online searching, Lorigo et al. admit:

"... it is important to note that eye tracking does not tell us how much users perceive in their peripheral field; to the best of our knowledge, nearly no literature studying peripheral vision exists from which we can effectively extrapolate to the context of online searching."

In 2001, Asress and Carpenter wrote in Vision Research that the systems determining whether we react to 'stop' signals during a saccade differ for peripheral and central vision. Supporting the notion that peripheral data might play a bigger role than pupil tracking can tell us, they point out that the peripheral and central stop processes seem to respond at the same speed.

How do we react to novel stimuli?

To determine whether it is more advantageous to have your own profile photo shown (manual Retweet) or someone else's (automatic Retweet), you can look at some of the research on our reactions to novel stimuli. Park, E. Shimojo and S. Shimojo published a paper in PNAS on the "roles of familiarity and novelty in visual preference judgements."

The 22 subjects in their experiments exhibited no preference between familiar and novel geometric figures, but preferred novel landscapes and familiar faces. One important question: when so many people use profile pictures that are mostly landscape, or in which their faces take up a tiny portion of the thumbnail, can these still be considered faces when shrunk down for a tweet, or would our brains process them as geometric figures/landscapes?

Relative preference for faces, geometric figures and natural scenes over multiple trials, with familiarity on the positive y-axis and novelty on the negative. A figure from Park, Shimojo & Shimojo, 2010.

A good quotation found in Jeremy M. Wolfe's review paper on "asymmetries in visual search" outlines the relevant Treisman Hypothesis:

"... it is easier to detect a deviant among standard stimuli than to find the standard stimulus hiding among deviants (Treisman and Gormican, 1988)."
The review goes on to mention work by Shen and Reingold (2001) where they demonstrated that, while the relative familiarity or the novelty of the user's target for the visual search was not important, having distractors that were familiar was important.

Said differently, if you are targeting 'good' content and all tweets can potentially be considered distractors, their results applied to this scenario might suggest that it would be more efficient to search for a visually novel tweet (automatic Retweet) than for one that is less so (manual Retweet). However, their symbol- and character-based experiments are very simple compared to the visually complicated attention-trap of the Twitter timeline, and there is nothing to suggest that the studies mentioned here necessarily transfer. Also, whether the preference for familiar faces outlined by Park et al. outweighs the importance of novelty as a visual stop signal during a saccade remains an open question.

Conclusion: Speed Matters?

From a visual perspective, I would suggest that Twitter Timeline browsing can be broken into two different activities:

  • Slow, purposeful scanning that focuses on the content of the tweets, as well as the username and full name of the user
  • Faster scanning that sticks to the left side of the tweet, focusing on the username and likely absorbing important peripheral information from the profile photo

In the former situation, our preference for familiar faces might make it more likely that a manual RT, with a familiar face, would receive attention. However, when the scanning speed exceeds the speed at which reading is possible, the photo likely plays a much larger role in attracting attention. If a novel signal is easier to identify, as the Treisman hypothesis suggests, does that mean novel profile photos are also easier to use as stop signals during a saccade? Or does the familiarity of an identifiable photo draw us in even at higher speeds?

Let me know what you think in the comments and I may use them in future updates to this post.

References