Deutschlandtakt and Country Size

Does the absolute size of a country matter for public transport planning? Usually it does not – construction costs do not seem to be sensitive to absolute size, and the basics of rail planning do not either. That Europe’s most intensely used mainline rail networks are those of Switzerland and the Netherlands, two geographically small countries, is not really about the inherent benefits of small size, but about the fact that most countries in Europe are small, so we should expect the very best as well as the very worst to be small.

But now Germany is copying Swiss and Dutch ideas of nationally integrated rail planning, in a way that showcases where size does matter. For decades Switzerland has had a national clockface schedule in which all trains are coordinated for maximum convenience of interchange between trains at key stations. For example, at Zurich, trains regularly arrive just before :00 and :30 every hour and leave just after, so passengers can connect with minimum wait. Germany is planning to implement the same scheme by 2030 but on a much bigger scale, dubbed Deutschlandtakt. This plan is for the most part good, but has some serious problems that come from overlearning from small countries rather than from similar-size France.

In accordance with best industry practices, there is integration of infrastructure and timetable planning. I encourage readers to go to the Ministry of Transport (BMVI) and look at some line maps – there are links to line maps by region as well as a national map for intercity trains. The intercity train map is especially instructive when it comes to scale-variance: it features multihour trips that would be a lot shorter if Germany made a serious attempt to build high-speed rail like France.

Before I go on and give details, I want to offer a caveat: Germany is not the United States. BMVI makes a lot of errors in planning and Deutsche Bahn is plagued by delays, but these are still basically professional organizations, unlike the American amateur hour of federal and state transportation departments, Amtrak, and sundry officials who are not even aware Germany has regional trains. As in London and Paris, the decisions here are defensible, just often incorrect.

Run as fast as necessary

Switzerland has no high-speed rail. It plans rail infrastructure using the maxim: run trains as fast as necessary, not as fast as possible. Zurich, Basel, and Bern are around 100 km from one another by rail, so the federal government invested in speeding up the trains so as to serve each city pair in just less than an hour. At the time of this writing, Zurich-Bern is 56 minutes one-way and the other two pairs are 53 each. Trains run twice an hour, leaving each of these three cities a little after :00 and :30 and arriving a little before, enabling passengers to connect to onward trains nationwide.

There is little benefit in speeding up Switzerland’s domestic trains further. If SBB increases the average speed to 140 km/h, comparable to the fastest legacy lines in Sweden and Britain, it will be able to reduce trip times to about 42 minutes. Direct passengers would benefit from faster trips, but interchange passengers would simply trade 10 minutes on a moving train for 10 minutes waiting for a connection. Moreover, drivers would trade 10 minutes working on a moving train for 10 minutes of turnaround, and the equipment itself would simply idle 10 minutes longer as well, and thus there would not be any savings in operating costs. A speedup can only fit into the national takt schedule if trains connect each city pair in just less than half an hour, but that would require average speeds near the high end of European high-speed rail, which are only achieved with hundreds of kilometers of nonstop 300 km/h running.
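
To put numbers on that threshold, here is a minimal sketch (my illustration, not an SBB planning formula) of the average speed a roughly 100 km leg needs in order to fit a takt slot, assuming a few minutes of padding for the timed connection:

    # Takt constraint, illustrative only: a trip fits the clockface schedule
    # if it finishes a few minutes before the :00/:30 connection pulse.
    def required_speed_kmh(distance_km, slot_minutes, pad_minutes=4):
        """Average speed needed to cover distance_km within a takt slot,
        keeping pad_minutes in reserve for the transfer."""
        running_time_h = (slot_minutes - pad_minutes) / 60
        return distance_km / running_time_h

    for slot in (60, 30):  # hourly pulse vs. half-hourly pulse
        print(f"{slot}-minute slot: {required_speed_kmh(100, slot):.0f} km/h average")

    # 60-minute slot: 107 km/h average
    # 30-minute slot: 231 km/h average

The jump from roughly 107 to roughly 231 km/h average is why a modest speedup buys nothing here: only clearing the next slot does.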

Instead of investing in high-speed rail like France, Switzerland incrementally invests in various interregional and intercity rail connections in order to improve the national takt. To oversimplify a complex situation, if a city pair is connected in 1:10, Switzerland will invest in reducing it to 55 minutes, in order to allow trains to fit into the hourly takt. This may involve high average speeds, depending on the length of the link. Bern is farther from Zurich and Basel than Zurich and Basel are from each other, so in 1996-2004, SBB built a 200 km/h line between Bern and Olten; it has more than 200 trains per day of various speed classes, so in 2007 it became the first railroad in the world to be equipped with ETCS Level 2 signaling.

With this systemwide thinking, Switzerland has built Europe’s strongest rail network by passenger traffic density, punctuality, and mode share. It is this approach that Germany seeks to imitate. Thus, the Deutschlandtakt sets up control cities served by trains on a clockface schedule every 30 minutes or every hour. For example, Erfurt is to have four trains per hour, two arriving just before :30 and leaving just after and two arriving just before :00 and leaving just after; passengers can transfer in all directions, going north toward Berlin via either Leipzig or Halle, south toward Munich, or west toward Frankfurt.

Flight-level zero airlines

Richard Mlynarik likes to mock the idea of high-speed rail as conceived in California as a flight-level zero airline. The mockery is about a bunch of features that imitate airlines even when they are inappropriate for trains. The TGV network has many flight-level zero airline features: tickets are sold using an opaque yield management system; trains mostly run nonstop between cities, so for example Paris-Marseille trains do not stop at Lyon and Paris-Lyon trains do not continue to Marseille; frequency is haphazard; transfers to regional trains are sporadic, and occasionally (as at Nice) TGVs are timed to just miss regional connections.

And yet, with all of these bad features, SNCF has higher long-distance ridership than DB, because at the end of the day the TGVs connect most major French cities to Paris at an average speed in the 200-250 km/h range, whereas the fastest German intercity trains average about 170 km/h and most are in the 120-150 range. The ICE network in Germany is not conceived as complete lines between pairs of cities, but rather as a series of bypasses around bottlenecks or slow sections, some with a maximum speed of 250 km/h and some with a maximum speed of 300. For example, between Berlin and Munich, only the segments between Ingolstadt and Nuremberg and between Halle and north of Bamberg are on new 300 km/h lines, and the rest are on upgraded legacy track.

Even though the maximum speed on some connections in Germany is the same as in France, there are long slow segments on urban approaches, even in cities with ample space for bypass tracks, like Berlin. The LGV Sud-Est diverges from the classical line 9 kilometers outside Paris and permits 270 km/h 20 kilometers out; on its way between Paris and Lyon, the TGV spends practically the entire way running at 270-300 km/h. No high-speed lines get this close to Berlin or Munich, even though in both cities, the built-up urban area gives way to farms within 15-20 kilometers of the train station.

The importance of absolute size

Switzerland and the Netherlands make do with very little high-speed rail. Large-scale speedups are of limited use in both countries, Switzerland because of the difficulty of getting Zurich-Basel trip times below half an hour and the Netherlands because all of its major cities are within regional rail distance of one another.

But Germany is much bigger. Today, ICE trains go between Berlin and Munich, a distance of about 600 kilometers, in just less than four hours. The Deutschlandtakt plan calls for a few minutes’ speedup to 3:49. At TGV speed, trains would run about an hour faster, which would fit well with timed transfers at both ends. Erfurt is somewhat to the north of the midpoint, but could still keep a timed transfer between trains to Munich, Frankfurt, and Berlin if everything were sped up.
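
As a sanity check on these figures, here is the arithmetic (mine, using the article's round numbers of 600 km and a 3:49 timetable, and the 200-250 km/h TGV averages cited above):

    # Berlin-Munich back-of-the-envelope, illustrative only.
    distance_km = 600
    current_min = 3 * 60 + 49          # Deutschlandtakt target of 3:49

    print(f"implied average today: {distance_km / (current_min / 60):.0f} km/h")
    for avg_kmh in (200, 225, 250):    # the TGV-like range cited above
        trip_min = distance_km / avg_kmh * 60
        print(f"at {avg_kmh} km/h: {int(trip_min // 60)}:{int(trip_min % 60):02d}")

    # implied average today: 157 km/h
    # at 200 km/h: 3:00
    # at 225 km/h: 2:40
    # at 250 km/h: 2:24

At the middle of that range the trip comes out about an hour faster than 3:49, consistent with the claim above.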

Elsewhere, DB is currently investing in improving the line between Stuttgart and Munich. Trains today run on curvy track, taking about 2:13 to do 250 km. There are plans to build 250 km/h high-speed rail for part of the way, targeting a trip time of 1:30; the Deutschlandtakt map is somewhat less ambitious, calling for 1:36, with much of the speedup coming from Stuttgart 21 making the intercity approach to Stuttgart much easier. But with a straight-line distance of 200 km, even passing via Ulm and Augsburg, trains could do this trip in less than an hour at TGV speeds, which would fit well into a national takt as well. No timed transfers are planned at Augsburg or Ulm. The Bavaria map even shows regional trains (in blue) at Augsburg timed to just miss the intercity trains to Munich. Likewise, the Baden-Württemberg map shows regional trains at Ulm timed to just miss the intercity trains to Stuttgart.

The same principle applies elsewhere in Germany. The Deutschlandtakt tightly fits trains between Munich and Frankfurt, doing the trip in 2:43 via Stuttgart or 2:46 via Nuremberg. But getting Munich-Stuttgart to just under an hour, together with Stuttgart 21 and a planned bypass of the congested Frankfurt-Mannheim mainline, would get Munich-Frankfurt to around two hours flat. Via Nuremberg, a new line to Frankfurt could connect Munich and Frankfurt in about an hour and a half at TGV speed; even allowing for some loose scheduling and extra stops like Würzburg, it can be done in 1:46 instead of 2:46, which fits into the same integrated plan at the two ends.

The value of a tightly integrated schedule is at its highest on regional rail networks, on which trains run hourly or half-hourly and have one-way trip times of half an hour to two hours. On metro networks the value is much lower, partly because passengers can make untimed transfers if trains come every five minutes, and partly because when the trains come every five minutes and a one-way trip takes 40 minutes, there are so many trains circulating at once that the run-as-fast-as-necessary principle makes the difference between 17 and 18 trainsets rather than that between two and three. In a large country in which trains run hourly or half-hourly and take several hours to connect major cities, timed transfers remain valuable, but running as fast as necessary is less useful than in Switzerland.
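
The fleet arithmetic behind that comparison is simple; here is a sketch (my formulation of the standard circulation rule of thumb, with an assumed 5-minute turnaround at each end):

    import math

    # Trainsets needed = round-trip cycle time divided by the headway.
    def trainsets(one_way_min, turnaround_min, headway_min):
        cycle_min = 2 * (one_way_min + turnaround_min)
        return math.ceil(cycle_min / headway_min)

    # Hourly intercity line: cutting 1:10 to 0:55 drops a full trainset
    # out of three, and makes the timed connection possible.
    print(trainsets(70, 5, 60), trainsets(55, 5, 60))    # 3 2

    # Metro line on a 5-minute headway: a comparable speedup saves one
    # trainset out of eighteen.
    print(trainsets(40, 5, 5), trainsets(37.5, 5, 5))    # 18 17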

The way forward for Germany

Germany needs to synthesize the two different rail paradigms of its neighbors – the integrated timetables of Switzerland and the Netherlands, and the high-speed rail network of France.

High investment levels in rail transport are of particular importance in Germany. For too long, planning in Germany has assumed the country would be demographically stagnant, even declining. There is less justification for investment in infrastructure in a country with the population growth rate of Italy or of last decade’s Germany than in one with the population growth rate of France, let alone one with that of Australia or Canada. However, the combination of refugee resettlement and a very strong economy attracting European and non-European work migration is changing this calculation. Even as the Ruhr and the former East Germany depopulate, we see strong population growth in the rich cities of the south and southwest as well as in Berlin.

The increased concentration of German population in the big cities also tilts the best planning in favor of the metropolitan-centric paradigm of France. Fast trains between Berlin, Frankfurt, and Munich gain value if these three cities grow in population whereas the smaller towns between them that the trains would bypass do not.

The Deutschlandtakt’s fundamental idea of a national integrated timed transfer schedule is good. However, a country the size and complexity of Germany needs to go beyond imitating what works in Switzerland and the Netherlands, and innovate in adapting best practices to its particular situation. People keep flying domestically since the trains take too long, or they take buses if the trains are too expensive and not much faster. Domestic flights are not a real factor in the Netherlands, and barely one in Switzerland; in Germany they are, so trains must compete with them as well as with flexible but slow cars.

The fact that Germany already has a functional passenger rail network argues in favor of more aggressive investment in high-speed rail. The United States should probably do more than just copy Switzerland, but with nonexistent intercity rail outside the Northeast Corridor and planners who barely know that Switzerland has trains, it should imitate rather than innovate. Germany has professional planners who know exactly how Germany falls short of its neighbors, and will be leaving too many benefits on the table if it decides that an average speed of about 150 km/h is good enough.

Germany can and should demand more: BMVI should enact a program with a budget in the tens of billions of euros to develop high-speed rail averaging 200-250 km/h connecting all of its major cities, and redo the Deutschlandtakt plans in support of such a network. Wedding French success in high-speed rail and Swiss and Dutch success in systemwide rail integration requires some innovative planning, but Germany is capable of it and should lead in infrastructure construction.




It's About More Than Another Four Years of GroKo

Grand coalitions endanger democracy and have always strengthened the political fringes, in parliament and outside it. The first grand coalition of post-war history was followed by the RAF; after the two recent grand coalitions we have the AfD as the third-strongest force in the Bundestag. Grand coalitions can therefore only ever be an extreme exception, to be avoided at almost any cost. The SPD in particular, by agreeing to its third grand coalition within four legislative periods, has turned the exception into the rule. The very currency of the abbreviation GroKo makes abundantly clear that people have somehow grown used to this political state of affairs and that its exceptional character has been lost. I have always wondered why the developments in Austria did not work as a warning; they look like nothing so much as a blueprint for what still lies ahead for Germany.

With this decision, the SPD's members are cementing, for the medium term, the AfD's position as the third-strongest or even second-strongest political force in Germany. They could instead have tolerated a minority government and thereby fostered a living parliamentary democracy. But the result is worst for the SPD itself. Twice the party let itself be worn down by Merkel, and it still chose a third GroKo. Against this backdrop, the party leadership's talk of renewing itself while in government seems downright grotesque. This "carry on as before" does not merely hamper a fresh start, it prevents one. For the reason the party can no longer win elections against Merkel and the CDU/CSU is the grand coalition itself. How is the party supposed to set itself apart and sharpen its profile when all it ever practices is consensus politics? Every criticism of Merkel comes across as criticism of its own policies. And that is exactly why Schulz got nowhere in the election campaign: whenever he began to criticize Merkel, she could point to their good, amicable joint record. How the urgently needed differentiation from the CDU/CSU is supposed to work from within the government remains unclear. For a party that was still progressive in the 1970s and once wanted to "dare more democracy", this flight into the next grand coalition, without any perspective, is at the very least a declaration of bankruptcy, if not already the death blow.

The real cause of this timidity, however, may be a programmatic vacuum that the mediocre coalition agreement can scarcely conceal. The task of social democracy, as Georg Diez puts it very aptly, would be to invent an emancipatory and just politics for the digital capitalism of the 21st century.

And even though comparisons with Weimar are hardly apt, because the social, economic, political, and legal conditions differ considerably, the evident connection between the SPD's decline and the strengthening of the right remains a parallel worth watching. For the AfD's success also has to do with the fact that the socially disadvantaged have turned away from the SPD and now, against their own interests, vote AfD. And that is due not least to a loss of political credibility. Credibility is exactly what the SPD currently lacks to the greatest possible degree. One inevitably wonders what exactly has changed in the facts since the SPD executive (it was not just Schulz) twice unanimously came out against a grand coalition. Because the answer is simply "nothing", the SPD leadership does not even attempt to explain the inexplicable. What the party has been staging since October serves as an object lesson in how to foster disenchantment with politics and gamble away the last remnant of political credibility.

At this point, though, one must once again cast a critical eye on the role of the media, most of which acted as if the only choice were between a GroKo and new elections, something that certainly resonated with the SPD's rank and file. The logical consequence of the SPD rejecting a grand coalition would of course have been a Merkel minority government, if only because of the constitutional situation, while new elections were rather unlikely. I have already laid out at length elsewhere the advantages of such a minority government over a GroKo.

At present the SPD, at the level of neither its members nor its leadership, is capable of looking beyond its own horizon and recognizing that far more is at stake than the question of whether to play Merkel's junior partner for another four years: namely, whether and how our democratic system can still function in the future.

The Aggregator Paradox

Which one of these options sounds better?

  • Fast loading web pages with responsive designs that look great on mobile, and ads that are respectful of the user experience
  • The elimination of pop-up ads, ad overlays, and autoplaying videos with sounds

Google is promising both; is the company’s offer too good to be true?

Why Web Pages Suck Redux

2015 may have been the nadir in terms of the user experience of the web, and in Why Web Pages Suck, I pinned the issue on publishers’ broken business model:

If you begin with the premise that web pages need to be free, then the list of stakeholders for most websites is incomplete without the inclusion of advertisers…Advertisers’ strong preference for programmatic advertising is why it’s so problematic to only discuss publishers and users when it comes to the state of ad-supported web pages: if advertisers are only spending money — and a lot of it — on programmatic advertising, then it follows that the only way for publishers to make money is to use programmatic advertising…

The price of efficiency for advertisers is the user experience of the reader. The problem for publishers, though, is that dollars and cents — which come from advertisers — are a far more scarce resource than are page views, leaving publishers with a binary choice: provide a great user experience and go out of business, or muddle along with all of the baggage that relying on advertising networks entails.

My prediction at the time was that Facebook Instant Articles — the Facebook-native format that the social network promised would speed up load times and enhance the reading experience, thus driving more engagement with publisher content — would become increasingly important to publishers:

Arguably the biggest takeaway should be that the chief objection to Facebook’s offer — that publishers are giving up their independence — is a red herring. Publishers are already slaves to the ad networks, and their primary decision at this point is which master — ad networks or Facebook — is preferable?

In fact, the big winner to date has been Google’s Accelerated Mobile Pages (AMP) initiative, which launched later that year with similar goals — faster page loads and a better reading experience. From Recode:

During its developer conference this week, Google announced that 31 million websites are using AMP, up 25 percent since October. Google says these fast-loading mobile webpages keep people from abandoning searches and by extension drive more traffic to websites.

The result is that in the first week of February, Google sent 466 million more pageviews to publishers — nearly 40 percent more — than it did in January 2017. Those pageviews came predominantly from mobile and AMP. Meanwhile, Facebook sent 200 million fewer, or 20 percent less. That’s according to Chartbeat, a publisher analytics company whose clients include the New York Times, CNN, the Washington Post and ESPN. Chartbeat says that the composition of its network didn’t materially change in that time.

The Chartbeat data doesn’t include Instant Articles specifically, but most accounts suggest the initiative is faltering: the Columbia Journalism Review posited that more than half of Instant Articles’ launch partners had abandoned the format, and Jonah Peretti, the CEO of BuzzFeed, the largest publisher to remain committed to the format, has taken to repeatedly criticizing Facebook for not sharing sufficient revenue with publications committed to the platform.

Aggregation Management

The relative success of AMP versus Instant Articles is a reminder that managing an ecosystem is a different skill than building one. Facebook and Google are both super-aggregators:

Super-Aggregators operate multi-sided markets with at least three sides — users, suppliers, and advertisers — and have zero marginal costs on all of them. The only two examples are Facebook and Google, which in addition to attracting users and suppliers for free, also have self-serve advertising models that generate revenue without corresponding variable costs (other social networks like Twitter and Snapchat rely to a much greater degree on sales-force driven ad sales).

Super-Aggregators are the ultimate rocket ships, and during the ascent ecosystem management is easy: keep the rocket pointed up-and-to-the-right with regards to users, and suppliers (here, publishers) will have no choice but to clamor for their own seat on the spaceship.

The problem — and forgive me if I stretch this analogy beyond the breaking point — comes when the oxygen is gone. The implication of Facebook and Google effectively taking all digital ad growth is that publishers increasingly can’t breathe, and while that is neither company’s responsibility on an individual publisher basis, it is a problem in aggregate, as Instant Articles is demonstrating. Specifically, Facebook is losing influence over the future of publishing to Google in particular.

A core idea of Aggregation Theory is that suppliers — in the case of Google and Facebook, that is publishers — commoditize themselves to fit into the modular framework that is their only route to end users owned by the aggregator. Critically, suppliers do so out of their own self-interest; consider the entire SEO industry, in which Google’s suppliers pay consultants to better make their content into the most Google-friendly commodity possible, all in the pursuit of greater revenue and profits.

This is a point that Facebook seems to have missed: the power that comes from directing lots of traffic towards a publisher stems from the revenue that results from said traffic, not the traffic itself. To that end, Facebook’s too-slow rollout of Instant Articles monetization, and its continued underinvestment in (if not outright indifference to) the Facebook Audience Network (for advertisements everywhere but the uber-profitable News Feed), have left an opening for Google: the search giant responded by iterating AMP far more quickly, not just in terms of formatting but especially monetization.

Critically, that monetization was not limited to Google’s own ad networks: from the beginning AMP has been committed to supporting multiple ad networks, which sidestepped the trap Facebook found itself in. By not taking responsibility for publisher monetization, Google made AMP more attractive than Instant Articles, which took responsibility and then failed to deliver.[1]

I get Facebook’s excuse: News Feed ads are so much more profitable for the company than Facebook Audience Network ads that from a company perspective it makes more sense to devote the vast majority of the company’s resources to the former; from an ecosystem perspective, though, the neglect of Facebook Audience Network has been a mistake. And that, by extension, is why Google’s approach was so smart: Google has the same incentives as Facebook to focus on its own advertising, but it also has the ecosystem responsibility to ensure the incentives in place for its suppliers pay off. Effectively offloading that payoff to third party networks ensures publishers get paid even as Google’s own revenue generation is focused on the search results surrounding those AMP articles.

Google’s Sticks

Search, of course, is the far more important reason why AMP is a success: Google prioritizes the format in search results. Indeed, for all of the praise I just heaped on AMP with regards to monetization, AMP CPMs are still significantly lower than traditional mobile web pages; publishers, though, are eager to support the format because a rush of traffic from Google more than makes up for it.

Here too Facebook failed to apply its power as an aggregator: if monetization is a carrot, favoring a particular format is a stick, and Facebook never wielded it. Contrary to expectations the social network never gave Instant Articles higher prominence in the News Feed algorithm, which meant publishers basically had the choice between more-difficult-to-monetize-but-faster-to-load Instant Articles or easier-to-monetize-and-aren’t-our-resources-better-spent-fixing-our-web-page? traditional web pages. Small wonder the latter won out!

In fact, for all of the criticism Facebook has received for its approach to publishers generally and around Instant Articles specifically, it seems likely that the company’s biggest mistake was that it did not leverage its power in the way that Google was more than willing to.

That’s not the only Google stick in the news: the company is also starting to block ads in Chrome. From the Wall Street Journal:

Beginning Thursday, Google Chrome, the world’s most popular web browser, will begin flagging advertising formats that fail to meet standards adopted by the Coalition for Better Ads, a group of advertising, tech and publishing companies, including Google, a unit of Alphabet Inc…

Sites with unacceptable ad formats—annoying ads like pop-ups, auto-playing video ads with sound and flashing animated ads—will receive a warning that they’re in violation of the standards. If they haven’t fixed the problem within 30 days, all of their ads — including ads that are compliant — will be blocked by the browser. That would be a major blow for publishers, many of which rely on advertising revenue.

The decision to curtail junk ads is partly a defensive one for both Google and publishers. Third-party ad blockers are exploding, with as many as 615 million devices world-wide using them, according to some estimates. Many publishers expressed optimism that eliminating annoying ads will reduce the need for third-party ad blockers, raise ad quality and boost the viability of digital advertising.

Nothing quite captures the relationship between suppliers and their aggregator like the expression of optimism that one of the companies actually destroying the viability of digital advertising for publishers will save it; then again, that is why Google’s carrots, while perhaps less effective than its sticks, are critical to making an ecosystem work.

Aggregation’s Antitrust Paradox

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sound pretty appealing!

This is the fundamental paradox presented by aggregation-based monopolies: by virtue of gaining users through the provision of a superior user experience, aggregators gain power over suppliers, which come onto the aggregator’s platforms on the aggregator’s terms, resulting in an even better experience for users, resulting in a virtuous cycle. There is no better example than Google’s actions with AMP and Chrome ad-blocking: Google is quite explicitly dictating exactly how its suppliers will access its customers, and it is hard to argue that the experience is not significantly better because of it.

At the same time, what Google is doing seems nakedly uncompetitive — thus the paradox. The point of antitrust law — both the consumer-centric U.S. interpretation and the European competitor-centric one — is ultimately to protect consumer welfare. What happens when protecting consumer welfare requires acting uncompetitively? Note that implicit in my analysis of Instant Articles above is that Facebook was not ruthless enough!

The Ad Advantage

That Google might be better for users by virtue of acting like a bully isn’t the only way in which aggregators mess with our preconceived assumptions about the world. Consider advertising: many commentators assume that user annoyance with ads will be the downfall of companies like Google and Facebook.

That, though, is far too narrow an understanding of “user experience”: the user experience is not simply the user interface, but rather the totality of an app or web page. In the case of Google, it has superior search, it is now promising faster web pages and fewer annoying ads, and, oh yeah, it is free to use. Yes, consumers are giving up their data, but even there Google has the user experience advantage: consumer data is far safer with Google than it is with random third party ad networks desperate to make their quarterly numbers.

Free matters in another way: in disruption theory integrated incumbents are thought to lose not only because of innovation in modular competing systems, but also because modular systems are cheaper: the ad advantage, though, is that the integrated incumbents — Google and Facebook — are free to end users. That means potential challengers have to have that much more of a superior user experience in every other aspect, because they can’t be cheaper.[2]

In other words, we can have our cake and eat it too — and it’s free to boot. Hopefully it’s not poisonous.

[1] Instant Articles allows publishers to sell their own ads directly, but explicitly bans third party ad networks.
[2] This, as an aside, is perhaps the biggest advantage of cryptonetworks: I’ve already noted in Tulips, Myths, and Cryptocurrencies that cryptonetworks are “probably the most viable way out from the antitrust trap created by Aggregation Theory”; that was in reference to decentralization, but that there is money to be made is itself an advantage when the competition is free. More on this tomorrow.

Netoju no Susume – On Compersion and Virtual Identities

You know that feeling where you’re watching kids run around, laughing, and it brings you joy? Or perhaps when your best friend is celebrating a promotion at work, and you feel happy for them? Or, say, when you watch a romantic comedy, or an underdog story, and when the couple kiss or the protagonist overcomes all struggles, you fistpump and/or cheer? There’s a term that encapsulates this feeling, this emotion, which comes from polyamory circles, and that term is “compersion,” taken to mean, “Joy at the joy of others.”

To some degree, one could say that all romantic comedies operate off of our desire to see the couple hit it off, but while some romantic sub-genres (see Harem RomComs, as per my write-up on Nisekoi) work more off of wanting the story to take its “natural pathway,” some shows, such as last season’s Netoju no Susume (either “Recovery of an MMO Junkie” or “Recommendation of the Wonderful Virtual Life” in English), really do bank on us feeling compersion for the characters, and desiring them to be happy, because our own happiness depends on it (to some degree, don’t get too crazy here).

(This is a Things I Like post, it’s not a review, but more a discussion of the show and of ideas that rose in my mind as a result of watching the show. There will be spoilers for the entire show.)

But wait, how can we feel compersion for people who don’t exist? Why do we feel joy about a story that is not only technically over before we visit it, but where the outcome is a foregone conclusion, as is the case in most romantic stories? Well, luckily for us, the subject matter of Netoju no Susume is actually conducive to answering this question, which is the second prong of this piece. For those who need a short primer, Netoju no Susume revolves around a woman in her thirties who quit her job and is now a NEET (Not in Education, Employment, or Training), who spends her days playing an MMORPG (think World of Warcraft), the person she meets, and their relationship with one another. Oh yeah, she plays a man inside the game, and the man she meets outside the game plays the female character who is her partner inside the game.

Right, people who don’t exist, right? Before we go further, I’d like to take an aside and mention that I played in roleplay chatrooms from late 1999 to late 2004. I know multiple couples who met online, going as far as crossing continents to marry. Some of these couples are still Facebook friends of mine. I am also familiar with instances where the couple meeting offline caused a relationship to implode, people who faked their deaths to avoid revealing the truth to one another or because they were emotional vampires, and yes, even people who lied about their sex and gender online. I’m no stranger to internet romance. The question of trust is paramount, but then again, in which relationship is it not the paramount question? And before we close this extended aside, I’d like to note that I was quite happy with all the realistic touches the series wove in about how such interactions work, both between people and inside the participants’ heads.

[Image: Sakurai Yuuta asking about him and his in-game avatar, Lily.]

So, here we are. Virtual people. How much of a difference is there between people we meet online and characters who exist only within a story? Let’s step back. You know the “I was only playing at being a bully!” line? Or, “This person, that has those emotions, it’s not really me”? I’m going to spoil it for you all right here, rather than go through more paragraphs: It’s always you. Now, you’re not just the you of one interaction. You’re the you that is respectful to your parents, but also the one who keys your neighbour’s car. The person who stops to feed stray cats? The person who doesn’t mind honking at people crossing the street slowly? The person who tells himself he could’ve done better if only he tried? You, you, you. It’s always you. It’s all you.

Speaking of “virtual selves,” or “unreal selves,” let’s go for a couple more cases, starting from the more far-fetched and moving to the less far-fetched one. There’s at least one theory that in dreams, all the characters you engage with actually stand in for your own self. Even if that theory doesn’t have much to recommend it, and the value of one’s sleeping dreams (as opposed to daydreams) is questionable, it brings to mind some theories on how the Self is formed – not through how others perceive us, but through how we perceive others’ perception of us. And so, the “virtual self” is made real, and the others, who are always virtual to us, are real in how they shape us.

The other case would be books. Once more I’m going to go to a quote. “We get out of books what we put in.” Books tell us things about ourselves. It’s a question whether what we get out is what the author put in, but it’s immaterial here. Some of the things we felt but didn’t accept. Other times, a mask we wear might shift as a result of a book. But even to just see the book in different ways, to analyze and relate to the characters, already demands a certain point of contact between us and them. We can’t actually gain much from characters with whom we share no points of contact, no perspective, no masks. Gatchaman Crowds is a show I love deeply. One of the reasons I like it so much is how many different angles you can approach it from. My first write-up on the first season approached it from the perspective of masks. Masks are like identities. Masks are like other people. That is to say, masks are a prism through which we approach things, and through which others approach us.

[Image: Sakurai Yuuta wishing he were Lily – not realizing, or accepting, that he really is.]

But approach is not the self. There’s still only one person underneath. That’s you, remember? This is very relevant to people who try to convince themselves of something, such as our two protagonists in the anime. Sakurai, the man, tries to convince himself that he can detach himself emotionally from Moriko (the real-life woman), while Lily (his character) can still remain sort of emotionally involved with Hayashi (Moriko’s character). He tries to tell himself those feelings of warmth and camaraderie he felt while playing with Moriko aren’t for her, but for her character. But just as Lily is him, Hayashi is Moriko. There are no two sets of people here, just two sets of masks, worn by a single set of people.

Moriko, well, she presents one face to the outside world, one that is withdrawn after being burnt, while being thankful for her online friends. I don’t even need to spell it out here. Moriko needed, from the people who surrounded her at her old job, the support she now receives online. Not getting it, she had to run away. It’s not two people who act differently based on the social context, but the same person who is freed to act differently based on the support they receive – based on the mask they are presented from without.

So, compersion? I guess I’ll tie it up for those who still need it. I’ll be using another old idiom to do so, obviously. “You need to love yourself before you can love others,” or “You need to love yourself before you can accept others’ love.” In light of this piece, both of them are sort of the same, aren’t they? It’s all about masks. You love others because you love how you see them in yourself, and how you see yourself in them. You love fictional characters and gain joy in their joy, because there is only one joy. Because you see yourself in them, and see them in you, or in a mask you could wear. You feel joy, because it is joyous. There is no separation. There is just joy.

[Image: Morioka Moriko being excited. When these two dorks were happy and excited, I felt warmth spread within.]

Compersion is used as a word that is opposite to jealousy. What is jealousy if not the inability to reconcile others with yourself? The claim that their masks and your mask, or your “real” face, do not actually align?

There are no virtual people. You’re always you. And the characters you love, and love loving? Are real in the only way that matters, as a mask to mirror you, and as an object of your directed emotions.

So, dear readers, how do you relate to the reality of fictional characters? How do you relate to your virtual friends? What do you think about the similarities and differences between these two groups?


With the above piece done, I’d like to take the space for a couple of other things. First, the blog reached its donation goal (to cover the costs involved in running it over the past 8.5 years), and I’m thankful in the extreme to all donors, readers, commentators, and friends.
Second, you’ll note that the promised Anime Season Preview post did not materialize. I realized I need at least two weeks to get it done, and I just couldn’t find the time. Also, while I think I added a different approach and depth of research to the analysis than most, it didn’t feel like content unique to me, and it became dated very quickly. I chose to spend my time writing posts that are “Guy-pieces.” This is the first such piece I’ve written in about a year and a half, and such pieces fill me with joy for years to come. Expect more of them, soon.

Now, I’d like to say that while I covered two topics in the above piece, I also wanted to cover the topic of “Adult Romance in Anime,” but I chose to not have another piece that takes over 3,000 words, and that topic more than merits its own space. I’d like to say that watching this show filled me with warmth and joy, and I enjoyed it. The side-characters didn’t get enough space, the voice acting and animation weren’t anything spectacular, and the plot took to pausing to not run into the conclusion too quickly. And yet, the show delivered on its two protagonists, and as someone with much experience both in online romance, and MMORPGs, both the characters involved felt real (again, except for a few moments where they were turned into pre-teens to stop plot progression), and the world around them felt real in its virtuality. I can easily recommend this show to all audiences.

Finally, if you’d like an older, and exquisitely-done story about online identities and romance, check out Vertigo’s USER mini-series. It’s darker, but also hopeful. It’s about the days of IRC-esque roleplaying rooms, and it’s just really good.

P.S. Thanks to Itai, my friend who introduced me to the term “Compersion.” I like it. It’s useful, and pretty.

When Bright Colours Make (or Break) an Anime

The Asterisk War was one of the most-watched shows on Crunchyroll in 2015. Now, before anyone goes on about the shit taste of the average Crunchyroll user, consider why viewers would have been drawn to the show beyond the magic high school premise.

The show looks pretty.

The bright, vivid colour palette is immediately striking, especially when the characters use their magical abilities. I haven’t actually watched the show myself, but the aesthetic looks so much more eye-catching than the likes of Absolute Duo and Magical Warfare. If you had to pick one magic high school show to watch from the key visuals, The Asterisk War has the visual personality to stand out in the crowd.

Colours are a huge part of the anime experience, and nobody knows that better than Aiko Matsuyama, the colour designer of The Asterisk War. She once said that the job is all about capturing the director and character designer’s vision. It’s also important to recreate the feel of the source material, because the viewer will notice if something is off.

The Asterisk War’s bright colour scheme carries over from the light novels.

As striking and as appealing as these visuals are, however, I personally don’t like them that much. The OP is a prime example of how the colours don’t really work that well in practice. They’re flashy, but the overly saturated colour scheme lacks consistency, especially when combined with special effects animation. Basically, it’s too much.

Bright colours make an anime stand out, but they don’t necessarily make it better. No Game No Life, for instance, was polarising for exactly the same reason; the extreme colour scheme may have been deliberate, but it can only be described as an assault on the senses.

Colour designs tend to work best when their strengths are invisible to the viewer; they ought to be eye-catching, but without drawing attention away from the overall composition of a shot. This is something that Matsuyama has noted and aspires towards. She has said that she can’t measure up to the woman who inspired her, and whose influence can be seen in the colour designs of The Asterisk War.

That person is Kumiko Nakayama, the colour designer of Macross Frontier.

Although Macross Frontier aired ten years ago, its visuals continue to stand out to anime viewers. This arguably has more to do with its vivid colour scheme than with its key animation, which was prone to inconsistencies. The colours in Macross Frontier don’t overwhelm the viewer either; the show saves its brightest colours for the concert scenes, making the two divas stand out whenever they take to the stage.

The Wings of Goodbye movie looks even more beautiful. In the final concert/battle scene, the palette starts off dark but eventually gets brighter as the sun rises and the hero stages his stirring comeback. The colours always perfectly match the emotions of the scene, enhancing their impact.

I think that one of the strongest things about Macross F is how the colours give consistency to the special effects and 3D animation. If you look at the 3D mecha designs on their own, they look as if they have no place in a 2D-animated show. But somehow, it all works way better than most shows with 3D mecha fights. Good colour coordination is a big reason why the CG doesn’t suck ass.

Colour coordinators typically work closely with the background director to ensure consistency across the board. If the CG objects and layouts integrate at all with the 2D animation, that’s because the artists have paid close attention to the overall colour palette and tone of the scene. You can see why colour directors often say that communication is the most important part of their job.

Aquarion EVOL (which Nakayama also handled the colour direction for) was really good at the 2D-3D integration too. In the shot above, the mech fits into the scene because you can see part of it reflecting the moon’s light. If you look closer, you might see that the mech stands out a bit too much against the city view in the background, but it’s subtle enough that it doesn’t detract from the striking beauty of the scene.

Watching Aquarion EVOL gave me a lot of respect for Nakayama – the show looks so darned pretty all the time. The colours are consistently eye-catching and attractive, but there’s always enough balance in the compositions to keep them from being distracting. Take the merging scenes, for instance. The characters are rendered naked, their bodies enveloped in light, but there’s darkness in these sequences to create texture.

Compare this to something like Absolute Duo, which goes for a similar contrasting effect at key moments, but fails at integrating the colour scheme throughout the entire shot. The overall effect lacks depth, as if the character is simply pasted over the backgrounds.

Advancements in digital technology have made digital colouring easier, but in the end, good colouring still comes down to having a strong vision and a sensitivity to how colours and tones mesh with each other. Nakayama started off colouring cels, and even with the constraints of hand-drawn colouring, she was able to use a nuanced combination of colours to help create memorable shots.

[Image: from Ai no Kusabi (1992)]

Of course, these are just my subjective impressions, and you’re welcome to disagree with my opinions on which anime have good colours. Are there any anime that stand out to you, colour-wise?

The Great Unbundling

To say that the Internet has changed the media business is so obvious it barely bears writing; the media business, though, is massive in scope, ranging from this site to The Walt Disney Company, with a multitude of formats, categories, and business models in between. And, it turns out that the impact of the Internet — and the outlook for the future — differs considerably depending on what part of the media industry you look at.

The Old Media Model

Nearly all media in the pre-Internet era functioned under the same general model:

[Diagram: the old media model – distribution, then integration]

Note that there are two parts in this model when it comes to making money — distribution and then integration — and the order matters. Distribution required massive up-front investment, whether that be printing presses, radio airplay and physical media, or broadcast licenses and cable wires; the payoff was that those that owned distribution could create money-making integrations:

Print: Newspapers and magazines primarily made money by integrating editorial and advertisements into a single publication:

[Diagram: print – editorial integrated with advertising]

Music: Record labels primarily made money by integrating back catalogs with new acts (which over time became part of the back catalog in their own right):

[Diagram: music – back catalogs integrated with new acts]

TV: Broadcast TV functioned similarly to print; control of distribution (via broadcast licenses) made it possible to integrate programming and advertising:

[Diagram: broadcast TV – programming integrated with advertising]

Cable TV combined the broadcast TV model with bundling, a particular form of integration:

[Diagram: cable TV – the broadcast model plus bundling]

The Economics of Bundling

It is important to understand the economics of bundling; Chris Dixon has written the definitive piece on the topic:

Under assumptions that apply to most information-based businesses, bundling benefits buyers and sellers. Consider the following simple model for the willingness-to-pay of two cable buyers, the “sports lover” and the “history lover”:

                   ESPN   History Channel
    Sports lover    $10        $3
    History lover    $3       $10

What price should the cable companies charge to maximize revenues? Note that optimal prices are always somewhere below the buyers’ willingness-to-pay. Otherwise the buyer wouldn’t benefit from the purchase. For simplicity, assume prices are set 10% lower than willingness-to-pay. If ESPN and the History Channel were sold individually, the revenue maximizing price would be $9 ($10 with a 10% discount). Sports lovers would buy ESPN and history lovers would buy the History Channel. The cable company would get $18 in revenue.

By bundling channels, the cable company can charge each customer $11.70 ($13 discounted 10%) for the bundle, yielding combined revenue of $23.40. The consumer surplus would be $2 in the non-bundle and $2.60 in the bundle. Thus both buyers and sellers benefit from bundling.

Dixon’s article is worth reading in full; what is critical to understand, though, is that while control of distribution created the conditions for the creation of the cable bundle, there is an underlying economic logic that is independent of distribution: if customers like more than one thing, then both distributors and customers gain from a bundle.
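
Dixon’s two-buyer example can be recomputed in a few lines (a sketch of the quoted model, using the willingness-to-pay figures from the table above):

    # Bundling economics per the quoted model: price at 90% of willingness
    # to pay (WTP); unbundled, each buyer takes only their favorite channel.
    wtp = {
        "sports lover":  {"ESPN": 10.0, "History": 3.0},
        "history lover": {"ESPN": 3.0, "History": 10.0},
    }
    DISCOUNT = 0.9

    unbundled = sum(max(channels.values()) * DISCOUNT for channels in wtp.values())
    bundled = sum(sum(channels.values()) * DISCOUNT for channels in wtp.values())

    print(f"unbundled revenue: ${unbundled:.2f}")  # unbundled revenue: $18.00
    print(f"bundled revenue:   ${bundled:.2f}")    # bundled revenue:   $23.40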

When Distribution Goes to Zero

A consistent theme on Stratechery is that perhaps the most important consequence of the Internet, at least from a business perspective, was the reduction of the cost of distribution to effectively zero.

The most obvious casualty has been text-based publications, and the reason should be clear: once newspapers and magazines lost their distribution-based monopoly on customer attention the integration of editorial and advertising fell apart. Advertisers could go directly to end users, first via ad networks and increasingly via Google and Facebook exclusively, while end users could avail themselves of any publication on the planet.

[Diagram: the post-Internet publishing model – advertisers reach users directly through Google and Facebook]

For Google and Facebook, the new integration is users and advertisers, and the new lock-in is attention; it is editorial that has nowhere else to go.

The music industry, meanwhile, has, at least relative to newspapers, come out of the shift to the Internet in relatively good shape; while piracy drove the music labels into the arms of Apple, which unbundled the album into the song, streaming has rewarded the integration of back catalogs and new music with bundle economics: more and more users are willing to pay $10/month for access to everything, significantly increasing the average revenue per customer. The result is an industry that looks remarkably similar to the pre-Internet era:

[Diagram: the streaming-era music model – labels still integrate back catalogs and new music]

Notice how little power Spotify and Apple Music have; neither has a sufficient user base to attract suppliers (artists) based on pure economics, in part because they don’t have access to back catalogs. Unlike newspapers, music labels built an integration that transcends distribution.

That leaves the ever-fascinating TV industry, which has resisted the effects of the Internet for a few different reasons:

  • First, and most obviously, until the past few years the Internet did not mean zero cost distribution: streaming video takes considerable bandwidth that most people lacked. And, on the flipside, producing compelling content is difficult and expensive, in stark contrast to text in particular but also music. This meant less competition.
  • Second, advertisers — and brand advertisers, in particular — choose TV not because it is the only option (like newspapers were), but because it delivers a superior return-on-investment. A television commercial is not only more compelling than a print advertisement, but it can reach a massive number of potential customers for a relatively low price and relatively low investment of resources (more on this in a moment).
  • Third, as noted above, the cable bundle, like streaming, has its own economic rationale for not just programmers and cable providers but also customers.

This first factor, particularly the lack of sufficient bandwidth, has certainly decreased in importance the last few years; what is interesting about TV, though, is that it is no more a unitary industry than is media: figuring out what will happen next requires unpacking TV into its different components.

The Jobs That TV Does

In 2013 I wrote a piece called The Jobs TV Does where I posited that TV has traditionally filled multiple roles in people’s lives:

  • TV kept us informed
  • TV provided educational content
  • TV provided a live view of sporting events
  • TV told stories
  • TV offered escapism, i.e. an antidote to boredom

It was already obvious then that the first two jobs had been taken over by the Internet: only old people got their news from TV, and there was better and broader educational content on YouTube or any number of websites than TV could ever deliver, even with 200 channels. The question I asked then was how long TV could maintain its advantage when it came to the last three jobs:

The disruption of TV will follow a similar path: a different category will provide better live sports, better story-telling, or better escapism. Said category will steal attention, and when TV no longer commands enough attention of enough people, the entire edifice will collapse. Suddenly.

I’d bet on escapism being the next job we give to something else, for a few reasons:

  • The economics of live sports are completely intertwined with the pay-TV model; this will be the last pillar to crumble
  • Networks still play a crucial role in providing “venture-funding” for great story-telling. Netflix is the great hope here
  • Escapism is in some sense indiscriminate; it doesn’t matter how our mind escapes, as long as it does. Yet it’s also highly personal; the more tailored the escape, the more fulfilling. This is why there are hundreds of TV channels. However, there will never be as many TV channels as there are apps.

I was right about escapism being on the verge of collapse, but the mechanism wasn’t so much apps as it was one app: Facebook.

Facebook, Snapchat, and Escapism

I wrote in The Facebook Epoch:

The use of mobile devices occupies all of the available time around intent. It is only when we’re doing something specific that we aren’t using our phones, and the empty spaces of our lives are far greater than anyone imagined. Into this void — this massive market, both in terms of numbers and available time — came the perfect product: a means of following, communicating, and interacting with our friends and family. And, while we use a PC with intent, what we humans most want to do with our free time is connect with other humans: as Aristotle long ago observed, “Man is by nature a social animal.” It turned out Facebook was most people’s natural habitat, and by most people I mean those billions using mobile.

Snapchat is certainly challenging Facebook in this regard, and one of the most interesting trends to watch in 2017 is whether this is the year both companies finally start to steal away not just TV’s attention but also TV’s advertising.

Facebook is laying the groundwork to do just that; the company has been pushing video for a long time now, and recently added a dedicated video tab to its app. What has been missing, though, is an advertising unit that can actually compete with TV for brand advertising dollars: Facebook’s current advertising options are, not only in their format but also in their focus on fine-grained targeting, predominantly designed for direct marketing. Direct marketing has always been well-suited for digital advertising; the point of the ad is to drive conversion, and digital is very good not only at measuring whether said conversion occurred but also at targeting the customers most likely to convert in the first place.

Brand advertising is different; whereas direct marketing is focused on the bottom of the marketing funnel, brand advertising is about making end users aware of your product in the first place, or simply building affinity for your brand as an investment in some future payoff. The mistake Facebook made for a long time was trying to win brand marketing dollars by delivering direct marketing results: the company invested tons of time and money in trying to detect and track the connection between a brand-focused advertisement and an eventual purchase, which is not only technically difficult — what if the purchase takes place months in the future, or offline? — but also a complete misunderstanding of what mattered to brand advertisers.

I noted above that brand advertisers find TV to deliver a superior return-on-investment; with its focus on tracking, Facebook was too concerned with the “return” at the expense of the “investment”. Specifically, taking advantage of Facebook’s targeting and tracking capabilities requires the continual time and attention of marketers; it was far more efficient to simply create a television commercial that reached a bunch of people at once and then track lift after the fact. This is why Procter & Gamble, the biggest TV advertiser in the world, scaled back its targeting efforts on Facebook.
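
To make “lift” concrete, here is a minimal sketch of the standard relative-lift calculation, with made-up numbers; it illustrates the concept only, not P&G’s or Facebook’s actual measurement methodology:

```python
# Relative brand lift: how much more often people who saw the campaign
# bought, compared to a holdout group that did not see it.
def brand_lift(exposed_buyers: int, exposed_total: int,
               control_buyers: int, control_total: int) -> float:
    exposed_rate = exposed_buyers / exposed_total
    control_rate = control_buyers / control_total
    return (exposed_rate - control_rate) / control_rate

# Hypothetical numbers: 2.6% of exposed households bought vs. 2.0% of holdouts.
print(f"{brand_lift(2600, 100_000, 2000, 100_000):.0%}")  # prints "30%"
```

The appeal for a brand marketer is that this after-the-fact measurement asks nothing of them while the campaign runs, which is exactly the “investment” side of the equation that per-user targeting and tracking gets wrong.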

Facebook is doing two things to change its value proposition for brand advertisers:

  • First, the company is reportedly on the verge of rolling out a new video advertising unit that will play in the middle of videos — kind of like a TV commercial.
  • Second, Facebook is focusing much more on being an advertising platform with massive scale that can also target — kind of like cable TV, but better — as opposed to a measurement machine that targets individuals and tracks them to the grocery store register.

That last point may not seem like much, but it’s a noticeable shift: on last quarter’s earnings call COO Sheryl Sandberg focused on the fact that Facebook made it possible for brand advertisers to do “big brand buys on our platform like they would do on TV, but make them much more targeted”; exactly one year earlier the pitch was “personalized marketing at scale” and “measuring ROI”.

I think this is the right shift for Facebook, but it also highlights why Snapchat is very much its rival: thanks to Facebook’s ownership of identity, Snapchat is unlikely to mount a serious challenge for direct marketing dollars (although it is — mistakenly in my opinion — building an app-install product); however, if identity matters less for brand advertising than simple scale does, then Snapchat’s push for attention, particularly amongst young people, is very much a threat to Facebook.

Not that that is much comfort to TV: Facebook and Snapchat have peeled off the “escapism” job in terms of attention; doing the same in terms of advertising is a question of when, not if.

Netflix and Story-Telling

Meanwhile, Netflix is proving to be far more than a “hope”; as I described last year in Netflix and the Conservation of Attractive Profits, the company leveraged the commoditization of time enabled by streaming to own end users, creating the conditions to modularize suppliers — and that’s exactly what is happening.

What is interesting is that scripted TV is turning out very differently from music: instead of leveraging their back catalogs to maintain exclusivity on new releases, most networks sold those catalogs to Netflix, giving the upstart the runway to compete in and increasingly dominate the market for new shows. The motivation is obvious: networks have been far more concerned with protecting their lucrative paid-TV revenue than with propping up their streaming initiatives; the big difference in music is that the labels’ old album-based business model had already been ruined. It’s a lot easier to move into the future when there is nothing to lose.

The Great Unbundling

The shifts of both escapism and story-telling away from traditional TV are noteworthy in their own right; equally important, though, is that they are happening at the same time. Here is what the landscape looks like once TV is broken up into the different “jobs” it has traditionally done for viewers:

[Chart: the TV landscape, broken up by the different jobs TV has traditionally done]

First, the new winners have models that look a lot like the one that destroyed the publishing industry: by owning end users these companies either capture revenue directly (Netflix) or have compelling platforms for advertisers; content producers, meanwhile, are commoditized.

Second, all four jobs were unbundled by different services, which is another way of saying there is no more bundle. That, by extension, means that one of the most important forces holding the TV ecosystem together is being sapped of its power. Bundling only makes sense if end users can get their second- and third-order preferences for less; what happens, though, if there are no more second- and third-order preferences to be had?
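
The logic is easiest to see in a toy model; the willingness-to-pay figures below are made up for illustration, and the mechanics are the textbook economics of bundling rather than anything from a carrier’s actual books:

```python
# Two customers, two channels: each customer has a strong first preference
# and a weaker second preference.
wtp = {
    "alice": {"sports": 10, "drama": 3},
    "bob":   {"sports": 3,  "drama": 10},
}

def revenue_unbundled(price):
    # A la carte: each customer buys each channel only if it is worth the price.
    return sum(v for person in wtp.values() for v in person.values() if v >= price)

def revenue_bundled(price):
    # Bundle: a customer buys if their total valuation clears the price.
    return sum(price for person in wtp.values() if sum(person.values()) >= price)

print(revenue_unbundled(10))  # 20: everyone buys only their first choice
print(revenue_bundled(13))    # 26: the bundle also captures second-choice value

# If second-order preferences vanish (drama now comes from Netflix, escapism
# from Facebook), the bundle premium disappears along with them:
wtp = {"alice": {"sports": 10, "drama": 0}, "bob": {"sports": 3, "drama": 0}}
print(revenue_bundled(13))    # 0: no one's total valuation clears the bundle price
print(revenue_unbundled(10))  # 10: only first choices are worth paying for
```

Once the second-choice channels are worth nothing to viewers, the bundle can no longer charge more than the sum of first choices, and the cross-subsidy that funds the long tail of networks dries up.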

To put this concept in concrete terms, the vast majority of discussion about paid TV has centered around ESPN specifically and sports generally; the Disney money-maker traded away its traditional 90% penetration guarantee for a higher carriage fee, and has subsequently seen its subscriber base dwindle faster than that of paid-TV as a whole, leading many to question its long-term prospects.

The truth, though, is that in the long run ESPN remains the most stable part of the cable bundle: it is the only TV “job” that, thanks to its investment in long-term rights deals, is not going anywhere. Indeed, what may ultimately happen is not that ESPN leaves the bundle to go over-the-top, but that a cable subscription becomes a de facto sports subscription, with ESPN at the center garnering massive carriage fees from a significantly reduced cable base. And, frankly, that may not be too bad of an outcome.


To be sure, it will take time for a lot of this analysis to play out; indeed, I’ve long criticized cord-cutting apostles for making the same prediction for going on 20 years. It’s a lot easier to predict unbundling than to say when it will happen — or how.

To that end, this is my best guess at the latter; as for when, the amount of change that has happened in just the last three years (since I wrote The Jobs TV Does) is substantial — and most of that change was simply laying the groundwork for actual shifts in behavior. Once those shifts start to happen in earnest there will be feedback loops in everything from advertising to content production to consumption that will only accelerate the changes, resulting in a transformed media landscape that will impact all parts of society. I’m starting to agree that the end is nearer than many think.
