The Skinny on Stereoscopic Films, or, What’s Up With 3D?

This is one of those moments where I find myself on the inside of a phenomenon which (increasingly) arouses strong opinions from members of the public. In this case, stereoscopic filmmaking – or 3D, for short (even though it’s not really 3D, and it tramples on a term used in animation for both stereoscopic and non-stereoscopic work).

I’m currently working on a 3D film in an age (or, more precisely, over the course of a year, starting with James Cameron’s Avatar) in which 3D technology is being pushed as the next in-thing. And yet there are many detractors, some of whom have good ammunition for their opinions.

As someone who has been intimately involved with a 3D production from beginning to end (well, almost – we’ll be in theatres in October), I find myself more and more a spokesperson for the technology, if not for the studios that are currently trying to cram every release into a 3D format, whether or not it was meant to be that way.

Let me begin by saying that I enjoy the notoriety of being the resident expert on 3D technology at parties and barbecues whenever the subject arises. Now that I have that out of the way, allow me to bitch…

Everyone keeps asking me: is 3D here to stay? The answer is a conditional “yes”. The condition being that film studios understand two things: First, that you can’t take a 2D movie and make it 3D using brain-dead rotoscoping software and expect it to be a success; second, that you can’t continue charging more for 3D films and not deliver a product that is both a good example of 3D and a relatively good film to boot.

To elaborate:

1)  Since the release of Avatar, there seem to be just as many films released in theatres boasting 3D which were never shot in 3D, nor even envisioned in 3D prior to production. Some examples would be Tim Burton’s Alice in Wonderland and M. Night Shyamalan’s The Last Airbender. These films were taken by the studios after completion and put through a 2D-to-3D conversion process, in which software rotoscopes the 3D effect frame by frame – a process unsupervised by the director.

This process, while handy for converting short bits from 2D to 3D for films which originate in 3D, ignores a very large consideration, one well understood by producers and filmmakers who shoot in 3D from the outset: you have to plan for 3D from the start. You cannot take a script or a shot list for a 2D film and superimpose it onto a 3D film: your set design, your camera lenses, your blocking, your picture editing…so many things change as a result of switching from 2D to 3D. When you simply take a 2D show and auto-render it in faked-out 3D, you get something which most viewers – critics and plebes alike – will say isn’t necessary. At worst, you get Clash Of The Titans – the current poster child for anyone with an axe to grind about 3D in general and post-converted 3D specifically. Not only was it a weak remake of the original (from what I hear), but the 3D post-conversion was done in two weeks. Two weeks. The resulting “3D” is, by all accounts, ridiculous to view.
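For the curious, the basic trick behind depth-based conversion can be sketched in a few lines of code. Each pixel is shifted horizontally in proportion to an assigned depth to fake a second eye’s viewpoint; wherever a foreground shape moves, it exposes a band of pixels the original camera never photographed, which someone then has to invent. This is a minimal illustrative sketch (the function name, depth values, and shift amounts are mine, not any studio’s actual pipeline):

```python
import numpy as np

def synthesize_view(image, depth, max_shift=3):
    """Toy depth-image-based rendering: shift each pixel horizontally
    in proportion to its depth to fake a second eye's viewpoint.
    Returns the new view plus a mask of 'holes' (disoccluded pixels
    with no source data) that a conversion must invent content for."""
    h, w = image.shape
    view = np.zeros_like(image)
    zbuf = np.full((h, w), -np.inf)   # nearer pixels win overlaps
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            shift = int(round(depth[y, x] * max_shift))
            nx = x + shift
            if 0 <= nx < w and depth[y, x] >= zbuf[y, nx]:
                view[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
                filled[y, nx] = True
    return view, ~filled  # ~filled marks the holes

# Flat background (depth 0) with one foreground block (depth 1):
img = np.arange(100, dtype=float).reshape(10, 10)
depth = np.zeros((10, 10))
depth[3:7, 3:6] = 1.0

view, holes = synthesize_view(img, depth)
# The block lands 3 pixels to the right, leaving a 3-pixel band of
# holes where it used to be -- the raw material of cut-out artifacts.
```

Filling those holes convincingly takes in-painting and hand rotoscoping, shot by shot, which is exactly the labour a rushed conversion schedule cannot afford.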

2)  Considering that theatres charge a premium for 3D films (about $3 more than usual, depending upon where you go – sometimes more), when a poorly rendered post-converted 3D film is released it damages the viability of an already vulnerable new technology. It’s one thing if a film is bad, but when it’s bad in two dimensions, bad in a crappily-rendered pseudo-third dimension, and then delivers the sucker punch of charging you MORE to see it…you get my point. I hope. Movie audiences can be forgiving, but there comes a point of revolt, which I can see happening if there aren’t enough 3D films released which originate in 3D. Furthermore, the studios do themselves no service if they don’t make a point of clarifying this to audiences: why can’t they say when a film was originally shot in 3D? Isn’t that a selling point? Likewise, why not be honest and say when a film has been post-converted? If it’s the case that no one wants it known that their film was post-converted…then why post-convert to 3D in the first place? There’s certainly no audience I know of that is clamouring for blocky cut-out shapes which look like they were poorly separated from the background using Photoshop. To summarize this point: content is king – the quality of content, not the volume of illegitimate content.

Up until Avatar (and god knows how I long for the day when another film takes its place as the “gold standard”), the greatest accomplishment in 3D technology was the few seconds of the guy in House of Wax, standing outside a theatre with a ping-pong paddle, knocking the ball directly toward the camera. You could imagine people ducking for cover at the time. That was 1953. From that point onward, 3D technology changed little, largely because the format never won over audiences: the films were oft-times gimmicky and there were never enough 3D films at any given time to make it feel as if the aesthetic was going anywhere. With the recent advent of digital cinematography, 3D is much easier (logistically and technically) to achieve. And while I would love someone to make “art” (are you reading this, Wong Kar Wai?), I’m happy if, for the time being, the format stakes its territory in the ghetto where its strengths have always been: action/sci-fi/fantasy – hey, if it works, why not? I don’t hear anyone clamouring for a 3D Terms of Endearment.

Technicians and filmmakers are doing their part: they are taking a risk and trying to push forward innovatively with something daunting and new. Is 3D here to stay? Again, a conditional “yes”. What we need are studios and theatre chains to be honest with the audience and not do irreparable damage to the very thing they are hoping to profit from.


Ryeberg

I should note that I’ve contributed a few pieces of work to an innovative website called Ryeberg. The conceit of the site is curated YouTube videos contributed by users, each narrated by a personal essay on one of a variety of topics. I am in revision mode currently, but when my stuff gets posted, I’ll let you know. In the meantime, feel free to visit.


Honesty, After Dark

A continual problem I have throughout the social media spectrum, the main culprits being Facebook and Twitter, is that – once you get to the point where you have your sister’s husband as your “friend”, once the guy you barely talked to in high school is “following” you – you are no longer able to be, well, honest anymore. You cannot post “Gary is an asshole” as a status update without, ultimately, answering to Gary (or his pot-smoking live-in partner, or your co-workers, who are largely idiots). You can’t even be vague: “Some guy I know is being an asshole.” People will know who you’re talking about – context leaves clues people can find. Gary will get mad and want answers.

Oh, you can be honest, alright. You can lay it on the table all you want, but with the inevitable consequence of offending people and getting in trouble for it. In other words, there’s nowhere to hide online. This is why I wish there were Bizarro social media sites like, say, Facebook After Dark and Undercover Twitter. Places where you can say the things you really want to say about the people you’re “friends” with, the people you “follow”, without fear of recrimination. I think we would all be happier as a result.

You reading this, Gary?

(P.S. There is no “Gary”, in case anyone is wondering. I don’t really have co-workers either) – ed


For *’s Sake

It’s been one of my battle cries of late. Everything in the world, culturally speaking (and I don’t necessarily mean high culture), seems to be evaporating into mindless bullshit.

The AV Club – a site I admittedly have a love/hate relationship with already – just posted an interview with actor Paul Giamatti. In the opening summary, the interviewer describes the plot of his latest film, which reads like a counterscript of 1999’s Being John Malkovich, and yet there is no mention of this parallel anywhere in the article – a connection even Entertainment Tonight would make. The interviewer talks about this upcoming film with Giamatti as if it and his role – the John Malkovich role, if it were Being John Malkovich – were just soulless objects to be discussed out of necessity. In other words, it’s just like any other media-junket interview, like something you would read in InStyle or Chatelaine. Not that those examples are b-a-d, but when you pride yourself on being better – especially savvy, tongue-in-cheek better – you shouldn’t even be in the same postal code as InStyle or Chatelaine if you want to retain your reputation.

The Motley Fool – again, a site previously known for being savvy, even though they deal with the stock market – now reads like Ain’t It Cool News, complete with arguments which, under rational analysis, seem completely idiotic and antithetical to what one would assume is their mission statement (i.e. being different from the rest of those brain-dead-and-short-sighted money sites).

Oh, and CNN. Not that they’ve ever been more relevant than a Reuters news ticker, but they’ve gone from mediocre to stupid by allowing one of their show hosts, Lou Dobbs, to continuously question the legitimacy of Barack Obama’s citizenship – a paranoid suspicion, virulent in the libertarian/right-wing fringe of the U.S., that has been repeatedly disproved (read: he doesn’t want Johnny Foreigner running and ruining the most-possibly-greatest-country-ever-in-the-world).

Now, one of the arguments I can imagine hearing is: well, Matt, in a 24-hour news day (whether on TV or the Internet), when people expect constant information, there inevitably has to be weaker material. To which I say: I understand, but I’d settle for less information over fewer hours (if need be) if it means the information will be consistent and better. After all, you are what you eat, and in this day and age we feed on media in an astonishingly unconscious and voracious manner.


Article/Review: Digital Maoism, by Jaron Lanier

[from the I Wanted To Write About This Article a Month Ago Department]:

Jaron Lanier is a contributor and member of edge.org 1 (which I have listed in my sidebar links). Specifically, he offers his perspective on the evolution of technology and the internet, and is credited as a “computer scientist and digital visionary”. In an essay posted May 30th, Digital Maoism: The Hazards of the New Online Collectivism, he tackles the rise of aggregator/meta-centric portals such as Wikipedia (which I also have listed in my sidebar links), where, he argues, individual contribution (and, to that extent, responsibility) is obscured by an emphasis on a hive-mind approach.

Lanier starts, appropriately enough, by sharing the fact that his Wikipedia entry refers to him as a film director, which is truthful only to the extent that he made one film, a decade and a half earlier. “Every time my Wikipedia entry is corrected,” he begins, “within a day I’m turned into a film director again. I can think of no more suitable punishment than making these determined Wikipedia goblins actually watch my one small old movie.”

And with this he sets his target. It isn’t, he insists, Wikipedia itself:

“No, the problem is in the way the Wikipedia has come to be regarded and used; how it’s been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it’s now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn’t make it any less dangerous.”

Lanier’s strongest point, as I see it, is his contention that the collectivist, hive-driven format of sites such as Wikipedia (and, extended in his essay, meta-meta-meta aggregators such as Digg and Reddit) continues a troubling trend toward aggregated, impersonally edited content over… well, content curated and written by identifiable humans.

“The race began innocently enough with the notion of creating directories of online destinations, such as the early incarnations of Yahoo. Then came AltaVista, where one could search using an inverted database of the content of the whole Web. Then came Google, which added page rank algorithms. Then came the blogs, which varied greatly in terms of quality and importance. This led to Meta-blogs such as Boing Boing, run by identified humans, which served to aggregate blogs. In all of these formulations, real people were still in charge. An individual or individuals were presenting a personality and taking responsibility.”
[…]
“In the last year or two the trend has been to remove the scent of people, so as to come as close as possible to simulating the appearance of content emerging out of the Web as if it were speaking to us as a supernatural oracle. This is where the use of the Internet crosses the line into delusion.”

Lanier’s line of inquiry unfolds to include the observation that the “meta” is now more popular and, in respect to Google News, more profitable than traditional media (newspapers in particular), yet no one standing next to the microphone is able to articulate the fact that popularity contests do not historically vet the best, but rather, what the collective believes is safest. And of course, nobody seems to want to say that the collective is just as culpable – in some ways more powerfully culpable – as individuals.

I highly recommend that anyone interested in the social internet, its architecture, and its direction give this essay a good read. Lanier’s observations move from the immediate suspects above to commentary on analogous movements, such as Linux 2, the “open” software movement, and the ever-ubiquitous MySpace. In many respects, it’s about time somebody spoke eloquently about the collapse of the human face behind these efficient portals.

However, I do have some issues. For one thing, the tangents never really weave into a comprehensive whole, making the essay feel much too cumbersome (and a page too long) to concisely support Lanier’s provocative thesis. Many of his arguments use the financial marketplace as a comparison which, although in theory an applicable analogy, is probably the last example I would use if I were arguing for a more humanistic approach. In fact, for someone arguing for that approach, Lanier’s language sometimes bears the same technocratic opaqueness which, I would argue, obscures a better understanding of the debate.

For example, leading to his summary:

“Empowering the collective does not empower individuals — just the reverse is true. There can be useful feedback loops set up between individuals and the hive mind, but the hive mind is too chaotic to be fed back into itself.”

I realize the term “feedback loop” is an applicable metaphor when discussing communication, but it’s disconcerting when a term normally confined to specialty occupations (namely, software programming and audio engineering) somehow becomes the standard upon which we seek to inspire a better world. Is this not, to some extent, asking a less-predictable society to be like a more-predictable tool?

Please read the essay for yourself and feel free to share your feedback in the comments section.

Please note: there is a discourse on the essay on the edge.org site here.

1. From their site: “Edge Foundation, Inc., was established in 1988 as an outgrowth of a group known as The Reality Club. Its informal membership includes some of the most interesting minds in the world. The mandate of Edge Foundation is to promote inquiry into and discussion of intellectual, philosophical, artistic, and literary issues, as well as to work for the intellectual and social achievement of society.”

2. There is no official site for “Linux” (outside of linux.org, which looks exactly as it did when it was first uploaded many, many years ago…and no, this is not a compliment). The link I provided goes to Ubuntu, which is the flavour of Linux I use at home. There are others.


Blog: Safari issues


A quick note that yesterday I checked out this blog using Safari…and nearly screamed. While the content appears fine (formatting etc.), the sidebar data is pretty scrambled. In detail:

  1. The orange category tags beneath the profile photo do not appear at all.
  2. The previous articles are not in list format, but placed side-by-side in a paragraph.
  3. My copyright info footer is in the sidebar when it should be at the bottom of the page.

I’m sure there’s more, and I’m looking into it. However, to be honest, having worked on HTML formatting before, I realise that sometimes you can’t please every web browser. So far, this blog looks consistent in the latest versions of Firefox, Internet Explorer, Opera, and Konqueror. The fact that Safari is having rendering issues is something I’d like to address, but quite frankly I can’t promise much of anything for the immediate future.

If you’re unsure whether you’re seeing this blog properly, below is an image of how it should look (taken from Firefox) – it shows at least the first half of the page for reference. I don’t want to be a browser fascist, but I would recommend that, if you currently use Safari, you consider switching to Firefox (or Opera).


If you use Safari and don’t notice any issues, please let me know. Cheers.

UPDATE (May 17/06): I believe it’s safe to say that the above only applies to those people running Safari v1.x – I was checking the site from an old G3 iBook at the time. Anyone running v2.x of Safari shouldn’t experience any substantial incompatibilities. Carry on.
