Google I/O 2009 – Google’s HTML 5 Work: What’s Next?


Matt Papakipos:
Okay, let’s get started. Can you hear me okay? Okay, first I’d like
to introduce the talk, and then myself
and my co-presenter. So we’ll be talking today
about HTML5. As you saw from
this morning’s keynote, HTML5 is a big focus for us
at Google for Google I/O this year. So I wanna talk
really specifically about some of the details, what we’re doing specifically
in Chrome around HTML5. So my name is Matt Papakipos. I head up on the engineering
side the efforts to implement the HTML5 APIs
inside the Chromium code base. And that’s the code base out of which we build
our Chrome browser, so it’s very important to us. I’d like to also introduce
Ian Fette. Ian is the Product Manager
for the HTML5 work inside Chrome and many other aspects
of Chrome. Ian Fette: Thanks, Matt. Papakipos: I should also
mention we’ve got… questions for this talk
posted on Google Moderator, so you’re welcome
to submit questions there, and we’ll go through
some of those at the end. So I’ll leave ample time
for questions both live and for Moderator
at the end of the talk. So what got us here?
What are we trying to do? So browsers
started a revolution that continues to this day. We’ve all seen this accelerating
trend for applications to move from desktop machines
into the web. This all began with Netscape,
right, way back. And initially,
as you all recall, the web was initially
quite static, right? You could go to a site.
You could read some stuff. People started adding
dynamic content initially by adding it
on the server side, right? So you could do some sort of
form fill out, do a post, get something
back from the server that was somewhat dynamic. But there’s always that
server latency in the loop. And there were certain kinds
of dynamic stuff that were just infeasible
in that world. In ’95, Netscape
introduced JavaScript, which was quite revolutionary
at the time, right? The notion that the web could
run code in the client side was a radical concept. And I think if we think
way back, we can remember
some of those early applications
of JavaScript. They look kind of comical now. They were JavaScript clocks. They were little things where,
you know, it would say,
“Hello. Type your name.” You’d type your name
and it would say, “Hello, Matt.” They look pretty lame today. But it’s amazing to see
what’s grown out of that in the years to come. The other side of the problem that got some significant
innovation in the late ’90s was around network fetches. So initially,
there was XMLHTTP and then XMLHttpRequest which added the ability
for webapps to do asynchronous requests
to the server, get asynchronous data
coming back, which was the other sort of
missing piece of the puzzle. And again we saw
a bit of a lag between the introduction
of the feature and the apps. In this case, it wasn’t
until a couple years later with Gmail that we saw some of the first AJAX style
apps really take off, apps that were doing lots of
JavaScript in a client, lots of asynchronous
network requests. And then with the advent
of Google Maps, we started adding graphics
into this whole equation, using JPEGs dynamically, moving them around
in the client side, scrolling them. And that’s, at least for me, when this all started
to get quite interesting. ‘Cause I looked at the web
and said, “Wow. “It’s starting to do graphics. It’s starting to do real
client/server applications.” Now we find ourselves
in a world where developers want
even more… more radical capabilities. So things like playing video
inside web applications seems like a mainstream
idea today. It certainly wasn’t
ten years ago. There’s lots of other devices on your PC you would also like
to use for web applications. Microphone and camera
might sound a little odd, but if you’re doing a video
conferencing application, that’s a very natural thing
to want to do from a web application. For many of these, we’re starting to see
browser plugins that allow you to do
some of these things. For example,
for Google Talk video we can do video conferencing
in Gmail, but we have to provide
a browser plugin and API plugin
to let you do that. It’s cool, it does work, but it’s not intrinsic
to the web itself. So that’s what we’re here
today to talk about. Other capabilities
people want in their webapps is better ability to control
file uploads, do multiple selections,
do drag and drop, things of that nature. Geolocation
is another big example. Phones started this, but we’re seeing it move
into laptops as well. My computer moves around a lot.
I’d like to know where it is. I can offer you a better path
to a local pizza restaurant if I know where you are. Offline capabilities. Google got started in this with
Gears a couple of years ago. And we’re very interested
in trying to figure out how can we make a webapp work
even when you’re off line? How can we do syncing
capabilities for webapps? Apps like Gmail let you use
your email when you’re on an airplane
and you have no connectivity. We’re now seeing this move
into mainstream HTML5, and I’ll talk about that
today. 3D graphics– another area I’m very
passionate about is 3D. I also run a project
called O3D, and we’re doing a tech talk
later today. O3D is a 3D graphics API. Again, as with Gears,
starting its life as a plugin, but we’re expecting this to move
into the mainstream browsers and into the standards
over time. Audio–there’s a lot left
to do in audio. There hasn’t been a lot
of progress in this area, but there’s many capabilities
your machine has in terms of doing positional
audio, multichannel audio, and mixing that the web really
doesn’t take advantage of today that we think are gonna
be interesting for webapps over the next few years. So what’s our goal, right? What’s the end point
that we want to get to? The end point I want to get to
is if a native app can do it, a webapp
should be able to do it. There are definitely
some challenges here around security, privacy, all the things that we know
are important on the web today. But we can make the web much
more compelling and interesting if we can bring
these capabilities to the applications. The challenge is–
or the opportunity is to also do that in a way
that is still conducive to taking advantage of the very positive aspects
of the web today– the fact that I don’t
have to install applications. The fact that
they’re always updated. The fact that they’re
integrated well with the cloud. They work well
when they’re online. All these are great things
about the web we want to make sure
not to lose in the process. So the process we’ve been
going through at Google is to spend a lot of time talking to application
developers–that’s you. So we’d like to have
a continuing dialogue with you about what you’d want to be
able to do from your webapps, figure out what native
applications people run, and figure out how can we move
those into the cloud, how can we move those
into the web? That’s the way we think
about the problem. The next stage
is implementation, so Gears is an example
of that. O3D is an example of that. We’re trying hard to innovate and create new APIs
that do new things and get them out there
in developers’ hands as quickly as possible
so you can try them out… with the full expectation
that you’ll want changes, you’ll want new capabilities. We need to figure out
how useful they are. And as we start
to understand that, we work on integrating it
inside our browser itself and then work
on standardization as the way to get it
into other browsers as well. None of this works
if it’s Chrome only. It’s very important to us
that this happens in all the major browsers, and the key to that
is standardization. So canvas is one of
the great success stories with HTML5 so far. I think some of you saw this
in the keynote, but canvas basically gives you
the ability to, from JavaScript code
in your webapp, do direct rendering
to the screen. So you have
pixel-level control– you can draw arcs and lines
and text in a way that looks
a lot like you would in a native application
when you want to use graphics, using an API like GDI or WPF or any number of different
rendering libraries. So it’s 2D graphics,
fully dynamic, callable from JavaScript. It’s a surface on which
you can draw 2D images so you can identify
part of your window that you want to do
this rendering onto and then make calls
from JavaScript to do the rendering calls. Now, there are a lot of
interesting uses of this being demonstrated in the client
developer sandbox out there. Some great examples
are Bespin, for example, which is using canvas for a lot of the rendering
of the UI itself. I put a simple example
up in the slide here to give the feel for
what it looks like to do… to use canvas from JavaScript,
and it’s quite simple. You locate the canvas object
in your HTML code and do getContext. You now have a JavaScript
object that you can use to do 2D rendering. And now you just make
a series of calls on that object in order to render into
the rectangle on the screen. So fillRect will draw
a 50×50 rectangle positioned at (0,0)
in the coordinate system.
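(A minimal sketch of the kind of call sequence being described; the element id and fill color are illustrative, not from the slide:)

    <canvas id="example" width="150" height="150"></canvas>
    <script>
      // Locate the canvas element and ask it for a 2D drawing context.
      var canvas = document.getElementById("example");
      var ctx = canvas.getContext("2d");

      // Pixel-level control from JavaScript: draw a 50x50 rectangle at (0,0).
      ctx.fillStyle = "rgb(200, 0, 0)";
      ctx.fillRect(0, 0, 50, 50);
    </script>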
We can draw rectangles, we can draw arcs, we can draw text. So you have all the basic 2D rendering capabilities
you need to do a bunch of interesting
applications. So let’s jump into
a quick demo of canvas. Ian, do you want to talk
to this? Fette: Sure. So this demo in particular, and a number of demos
using canvas, are actually available
on Chrome Experiments, which you might have
seen earlier. One of my favorites
was actually done by a guy at Google named Dean,
and it’s called The Monster. And it starts off very simple.
He’s just drawing a square. But all of a sudden,
he’s evolving this. He’s using JavaScript
to control this, to rotate this box,
and all of sudden, it starts getting
more complex. It starts splitting up
into a bunch of polygons. It grows arms. And this thing just gets
amazingly complex. But the cool part is,
it’s all in JavaScript. This looks like, you know, something that I would find
in a 3D screensaver. This is not a simple, you know,
fill rectangle 50×50. This is incredibly complex. But if you look at the source, it’s all just
a bunch of rectangles, it’s all just a bunch
of JavaScript API calls. So here we’re providing
a very basic, fundamental API, and people are able
to build on top of that and do incredibly
complicated things, which I think is the most
exciting part. Papakipos: Thank you. Another area
that we’ve been working on is local storage. So in general, this is to
support offline applications– so applications that want to
work when you’re on the airplane when you’re in airplane mode, when you’re in an area
with bad reception and you don’t have Wi-Fi
available. So it’s a way to store data
client side. So it gives you access
to the local disc so that you can store
your offline email, you can store images
that you want to work with– whatever the webapp
wants to do. And the cool thing is,
since this is an API, it’s really up to your
application to figure out how you want
to use offline capabilities. So you should look at it
from the point of view of, well,
what is my application? Is it a photo editing
application? Is it an email application?
Is it a social application? And figure out what are
the aspects of that that make sense to do
while you’re offline. And certainly, for Gmail,
if you’re offline, writing email while offline,
reading email are perfectly sensible
things to do. And local store provides
the mechanism for doing that. So there’s two APIs–
the database API, which is basically
a SQLite interface that gives you
an SQL interface to do inserts and deletes and lookups inside
a SQL database stored on
the local computer. The other one
is the structured storage API. This is what’s called
local store, which is a somewhat simpler,
easier-to-use API that gives you effectively
a persistent hash table. So from your JavaScript code, you can store things,
key-value pairs from your webapp. When you do that,
if you then fetch them later from your same webapp, whether you’re online
or offline, you get the same values
back. So basically, a persistent hash table that you can use to store data on the local machine and get it reliably, whether you’re connected to the network or not.
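(A rough sketch of the key-value flavor being described, using the localStorage object from the spec; the key and value are made up for illustration:)

    // Store a key-value pair; it persists on the local machine.
    localStorage.setItem("draftSubject", "Notes from the flight");

    // Later, online or offline, the same value comes back.
    var subject = localStorage.getItem("draftSubject");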
This is something we pioneered with Gears, originally, a couple of years ago. And the cool part
is we’ve been successful over the last year
of getting it standardized by the W3C. So it’s now part of
the web standards themselves. And so we’re hard at work
right now integrating this into Chromium
and Chrome. And some of the other browsers,
in fact, ship this already. For example, Safari browser currently
supports the database API fully. So to give you a feel
for what it looks like, I put a code sample up here. So here we have–
we’re doing… we’re fundamentally
just executing an SQL statement. So if you’re familiar
with SQL, this is the language
or the syntax you use to do modifications
of the database, whether to look up or store
or whatever. So in this case we’re doing
executeSql. We’re doing a SELECT, and a SELECT in SQL lingo is just a lookup. So if you look up SQL syntax, that’s how you look something up in the database. So it’s saying fetch all fields from MyTable.
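(A sketch of roughly what that call looks like; MyTable is from the talk, while the database name, size, and callback are illustrative:)

    var db = openDatabase("mydb", "1.0", "Example database", 200000);
    db.transaction(function (tx) {
      // A SELECT is just a lookup: fetch all fields from MyTable.
      tx.executeSql("SELECT * FROM MyTable", [], function (tx, results) {
        for (var i = 0; i < results.rows.length; i++) {
          console.log(results.rows.item(i));
        }
      });
    });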
And SQL is quite a flexible format. You can do searches with it
and find things that match. It’s quite a sophisticated way
to manage the local database. The good news is if you’re
familiar with doing server-side web applications, you’re already using
a database of some kind, and in fact, you’re probably
using SQL. So it’s actually quite easy
to learn this if you’re familiar with
server-side webapp development. The best demo here
of local store that I can think of
is probably Gmail offline. It’s worth giving that a try if you want to get a feel
for sort of what it feels like to use an offline
capable app. And the interesting demo
is just turn it on, wait for it
to sync your email, then turn off all your
network connections, open your Gmail,
and it’s still there, and it still works. Another interesting aspect of the offline application
problem is syncing. I just mentioned it. We all know that
you’ve gotta be really careful when you write web applications
not to lock up the browser. If I write a while (1) loop
in JavaScript, the whole thing locks up,
it stops refreshing the screen, bad things happen,
users are unhappy. This is quite a challenge when you’re doing
offline applications because if I turn on
offline in Gmail, I may need to sync, you know,
200 megabytes of email if I’ve never synced before. I don’t want to just do that
inside a loop in my main JavaScript thread, or I’m gonna lock up
the browser. So workers are a nice solution
to this problem. They give you the capability to run a background thread
effectively. So if you look at desktop
operating systems, they have threads,
and they have processes, right? They have ways that you can,
in your application, express concurrency through
additional threads of control. Fundamentally,
that’s what workers are. They give you the ability
to start up a background thread called the worker, and it runs in parallel
with your main display thread, which means
in the background thread, you can do as much IO as you want and you don’t have to worry about locking up the browser display.
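(A minimal sketch of a dedicated worker; the file name sync.js and the messages are illustrative:)

    // Main page: start a background thread and talk to it with messages.
    var worker = new Worker("sync.js");
    worker.onmessage = function (e) {
      console.log("worker says: " + e.data);
    };
    worker.postMessage("start-sync");

    // sync.js: runs in parallel with the display thread, so heavy I/O here
    // does not lock up the browser.
    onmessage = function (e) {
      // ...do the long-running work...
      postMessage("done");
    };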
So workers are quite rich. There are three different flavors of workers that are in progress and are at various points of specification
and standardization. The most mature of them
right now are dedicated
and shared workers. So dedicated workers are effectively bound
to a single tab. So what this means is if I have a browser
with multiple tabs open or multiple windows open, if I start a worker up
in that tab, that worker’s specific
to that one tab. So if I have multiple
Gmail.com tabs open, this worker’s specific
to this tab and the other one
can start a separate worker. There’s a different flavor
of these called shared workers. And what this means is I have one background
worker thread that I want to share between
all the tabs at my domain. So if I have multiple tabs
open at Gmail.com, shared worker,
if you start that one worker up, it’s shared by all of the tabs
at that domain. So a good example of why you would want
to use a shared worker is something like
syncing your email, right? But if I have multiple tabs
open at the same email address and the same account, I’d just as soon have them
share the syncing rather than sync separately,
which would make no sense. The last category of them–
and this is the newest ones not quite finished with
standardization yet– is what we call
persistent workers. The idea here is there are
some things you’d like to do in web applications where you’d like background
execution ability, but you want to run it even
when the browser isn’t running. And a great example here
is Gmail itself, right? I would love to have
my Gmail sync, even if the browser’s
not running, even if Gmail’s not
up in a tab right now. I’d like to make sure
it’s synced all the time. Another good example
is notifications. There’s certain applications
like email where you’d love to be notified
when you get email even if you’re not
running email right now. So persistent workers are still in
a somewhat experimental stage, but there’s an ongoing
implementation effort in the Chromium and WebKit
code base to try to bring this out. And the development of this
is ongoing in WebKit and Google Chrome. So some versions of this
work in Safari now. It works in Gears now. Dedicated and shared workers
work in Gears. And we’re working in Chromium
and WebKit on persistent worker
implementation as well. Okay. Application cache. So the last piece of the puzzle
for offline capable applications is appcache. The problem that appcache solves
is, you know, picture yourself
going in the airplane. You’ve got no internet
connectivity. You type in Gmail.com. Well, you’re gonna get
a server not found error ’cause you’re not
on the network. So appcache
solves this problem. Appcache provides a mechanism where your application
can create a manifest file where you specify–
these are the URLs that I want to have access to
when I’m offline. So you literally
specify a list of these are all the URLs
at Gmail.com, or whatever your domain is, where these are the URLs I want
to access when I’m offline. Appcache then syncs those
for you so you have a local copy
on the machine, and thereafter,
once you’ve enabled this, if it finds itself
not in the network and you try to fetch
a specific URL, it’ll fetch the version
off the local disc. So this can include
your HTML content, it can include
your JavaScript content, and it can include JPEGs so you can basically sort of
freeze-dry your application on your local machine and then fetch it
as if you were on the network.
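(A sketch of the mechanism being described; the file names are illustrative. The manifest file lists the URLs to keep available offline:)

    CACHE MANIFEST
    # myapp.manifest: everything listed here gets a local copy.
    /index.html
    /mail.js
    /logo.jpg

The page then points at that manifest from its root element:

    <html manifest="myapp.manifest">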
Again, the key to how this works is this manifest file, and what that means is
that you have full control over which parts of your
application are accessible while offline. There may be parts
of your application that don’t make sense to do
while you’re offline. For example, searching
a big database backend may not be feasible
when you’re offline, so you can decide
that parts of your application are cached and parts of it
are not cached– both statically
in the manifest file or you can dynamically
add things to that list
of cached entities. This is implemented
in WebKit today. It works in Safari today. We have it working in Gears and then implementation
in Chromium is ongoing, so we’re most of the way
through implementation of this in Chrome itself. Video. So moving
out of the offline arena into some of the newer areas, video is one
that we’re very excited about. So we recently launched this
in Chrome, in the dev channel of Chrome. So the sort of experimental
builds of Chrome now have video tag support. The problem this is solving
is sort of how do I play video
on my website and do it in a way
that’s intrinsic to the web? So what’s neat about
the video tag is without using
any plugins at all, I can create a web page where I just create
a simple video tag– so I said
video src=video.mp4, position it where I want it, and then there’s video playing
at that part of my web page. So putting video
in your web page becomes as simple as putting
an image in your web page. Today to put an image in,
you put an image tag, you say where you want
the JPEG, you’re done. Video now becomes that simple. There are built-in
playback controls, so it comes with a stop
and a pause and a play button. And there are controls
that you can use from HTML or from your
dynamic JavaScript to turn the controls
on or off, depending on whether
you want to use an alternate control
representation, you want built-in controls– it’s fully under your control
as the web developer. There’s also full script
control over the video itself, so if you want to control
when the video starts or what happens
when it finishes, you can set up callbacks,
you can trigger play, you can change video streams. There’s quite a flexible
JavaScript API. So Chrome–for Chrome
what we’ve done is we support
the MP4 container with H.264 video decoder
and an AAC audio decoder. So some nice, commonly
supported formats built in, and the Chrome dev channel
will be making their way out through to all
the Chrome users very soon. There’s also a set of formats
that Apple currently supports. So Safari has got
great support for video, and we’re excited
to see this moving into a bunch of browsers. The other codec set
that Chrome supports is an Ogg package
with Theora and Vorbis decoders. So we’ve got both
a traditional H264, the same format
that we use on YouTube, available today, and then there’s also
a full open source code stack for an IP free stack
for doing it as well, for Theora and Vorbis. Uh…okay. So let’s do a demo really quick
of video. It’s fun to give video demos,
’cause they are just cool. Fette: Great.
So as Matt said, video is actually
extremely simple to use. One of the…
you know, I like to say it’s as simple as putting
an image on a tag, on a page. So I want to actually
demo that. What I have right here
is a video on a page. It’s simple.
It’s a simple video. It’s a simple page.
[light music playing] And you can see that
the video is playing. If I hover over it,
I see I’ve got– Papakipos: Just gonna
kill the audio. Fette: Thank you.
If I hover over it, I’ve got the default controls. I didn’t have to build these.
They came for free. I’ve got pause over here.
I’ve got a little slider. And I can mess with the volume
settings if I want. And to show you
how simple this is, I want to show you
the source code for this page. Can make this a little bigger. Papakipos: As it should be.
It’s tiny. Fette: Yes. It’s very tiny. All I have to do is just
put in the video tag, point to my video, and I just put in two
optional attributes here. Controls gives me
those default controls that you saw, and autoplay makes the video
play as soon as it’s loaded. Dead simple. There’s some
alternate text here in case your video doesn’t–
excuse me– in case your browser
doesn’t support video, and that’s it.
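(Roughly what the page source being described looks like; video.mp4 matches the earlier slide, and the fallback text is illustrative:)

    <video src="video.mp4" controls autoplay>
      Your browser does not support the video tag.
    </video>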
If you want, you can make this more complicated. You can script it. You can change
the playback speed. There’s all sorts of things
you can do, but if you want to be
dead simple, you can. Papakipos: Cool. Thank you. Another big area
we’ve been working on is rich text editing. So…many of you have
tried this, I’m sure, and spent a lot of time
working on rich text. Rich text today on the web
is quite hard. What I mean by
rich text editing is, you know,
having a text entry field where the user
can look at text which is rich in the sense
of having bold and italics and fonts
and different point sizes, all the things that
we’re accustomed to expecting out of modern
document systems. Well, the only thing really
built into the web here today is text boxes, which are
very lean and mean, right, no control over fonts or bold
or italic or any of that. The web, for a long time,
has had this capability called contentEditable where you can specify
at a tag level that a certain sub-tree
of JavaScript–I’m sorry– of HTML is to be editable
in the browser window. The problem with this
is the implementations have been wildly inconsistent,
as many of you know. So how certain tags behave when they’re in
a contentEditable region changes dramatically
between browsers, whether they support selections
in copy and paste varies dramatically. What this means for us at Google
is that in Google when we do an application
like…like Google documents where we let you edit text
in a rich text format, we’re using contentEditable,
but we also have to download 200 kilobytes
of JavaScript code to handle
all the browser differences that are entailed from these
implementation differences and how contentEditable
is implemented in different browsers. So 200 kilobytes of code
may not sound like a lot, but that’s a big deal,
right? It’s a big deal if you’re
running this thing on a phone or you’re running
in a bad network connection or in a part of the world that has very slow
internet connectivity. And even for the rest of us,
with a, you know, right here with
a good network connection, it adds latency, right? This is adding latency
to page loads. It’s kind of crazy that
when I go to Google Docs and open a document,
I have to spend time downloading 200 kilobytes
of JavaScript code just to deal with
browser differences and rich text editing. So we’re very excited
about this area, and we’re doing quite a bit
of work in it right now. A lot of the work right now
is in specification. The weakness
of contentEditable has always been
that the spec has been weak, and this is why
the implementations are all so different. So we’re working quite a bit
right now to spec out with the W3C– a better spec for exactly
how should contentEditable work, how do we spec it in a way
where we have built-in support for cut and paste,
for good support for selections, for a consistent set of fonts
and bold and italics and all the things
that make sense–that we expect in there? I mentioned execCommand in here. This is the ability to, once the user’s made a selection, make it bold, make it italic.
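(A small sketch of the two pieces being discussed; the editable text and function name are illustrative:)

    <!-- Mark a region of the page as editable directly in the browser. -->
    <div contenteditable="true">Some rich text to edit...</div>

    <script>
      // With a selection active, execCommand applies formatting to it.
      function makeSelectionBold() {
        document.execCommand("bold", false, null);
      }
    </script>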
This is an example of one of the areas
between browsers right now and one of the reasons
we have to download so much JavaScript code
to work around these issues. So the end point
we’re working to get towards is to make it so that
rich text editing is also a one-liner, right? It becomes as simple as
TextBox has always been, right? I just say, “Give me
a rich text area. Make it all fully editable.” And then I can expect
a consistent set of HTML that will come out
in response to user actions. So we want to make it easy
for the apps. So that’s still
in the early days, but expect more from us
over time. Notifications are another
big area we’ve been working on. Currently, for web application,
you have a limited set of ways that you can get
the attention of your users. You can do something
in the tab, right? But then you have the problem of what if they’re not
in the tab right now? So the other alternative
is alert boxes. Problem is,
users hate alert boxes, right? I come into work every day
and find an alert box open saying that I have
a calendar entry coming up, and my whole browser’s locked up
till I address that alert box. It’s not a great
user experience. Alert boxes
are not very flexible, and I don’t get alert boxes if I’m not running the browser
as well. So we’ve been working quite hard
on trying to figure out what is a less intrusive
event notification mechanism and one that has better control
over presentation and one that is more reliable? By control over presentation
I mean how do I do rich text in it,
right? The current alert box is– it’s really just
an ascii string. I’d like a little bit
more control. I’d like it to look like
the web and feel interesting. And I want it to work, regardless of which tab
or window has focus or whether it’s iconified or any number
of different factors. So we’re currently prototyping
some implementations of this. Again, this one is in
the very early stages. There’s no standard here yet. And we’re in
a prototyping stage and would love input
about what people want. But what we’re trying
to achieve is to create
a notification system where your web application
can just register, say I want to present
notifications, make API calls. When you want to present
a notification, know that the user
will get it reliably, have a way to get back
to your application. And it won’t lock up
the browser in the way that the alert
dialogue box does today. Web sockets. As you notice, we started
with sort of the stuff that’s furthest along
in implementation, the things that
we’re shipping already. Now we’re moving into
the realm of things that we’re just beginning
work on but areas we think
are very important and things
that we’ll be launching over the next few months. So Web sockets.
This is a very interesting area. So if you look at how network
communication works for web applications today,
it’s, frankly, really weird. It manages to work, but when you look at how
traditional desktop applications do client/server
and client/client communication, it’s a very different model,
right? You open a TCP connection
to a server, you have a persistent
connection open. You can send packets
both directions, receive packets. It’s always synchronous.
It’s all very simple. The web has a very different
model, as you know. You do these
asynchronous requests. When you want a persistent
connection to the server today, it’s actually
quite complicated. There are two challenges. One is how does the server
notify me asynchronously? Because there is
no persistent connection. The other one is how do I have
a consistent connection that I can do bidirectional
sends and receives on? So today what people do
is things like hanging post requests
as a mechanism to allow you to get
asynchronous notifications. It’s a weird way,
and it’s somewhat unreliable, and there are many challenges that I’m sure you’re familiar
with with hanging post requests. Web sockets are an API that are already specified
in the HTML5 spec that try to solve
this problem. So it’s an API
that looks a lot more like a normal TCP connection, so it’s a way to create
a connection back to your server from the client and then simply do
sends and receives and asynchronous callbacks
when you get receives. Sort of very sort of TCP-like
protocol for web applications. So it means it becomes easy to
get asynchronous notifications. You already have
the socket open. You just get
an asynchronous send. You do a receive.
You get a callback. It becomes much simpler
to implement… to implement
asynchronous notifies. It becomes much simpler to do
the sort of persistent connection with
bidirectional communication. There is a specification
already here for HTML5. There’s some work still ongoing
to refine the definition of the protocol for how connections work
with the server and several other aspects. And we’re beginning
prototype implementation so that we can start
to use this, start to experiment it, and get it in the hands
of folks like yourselves. But the goal here
with Web sockets is to make persistent
server communication, asynchronous notification
to the server much simpler.
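(A sketch following the draft API; the URL and messages are illustrative, and the details were still being refined at the time of the talk:)

    // Open a persistent, bidirectional connection back to the server.
    var socket = new WebSocket("ws://example.com/updates");

    socket.onopen = function () {
      socket.send("hello");                      // send whenever you like
    };

    socket.onmessage = function (e) {
      console.log("server pushed: " + e.data);   // asynchronous callback on receive
    };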
3D graphics. So this is an area I’m personally very excited about. So there are two big efforts that have been going on
in this area that we’ve been contributing to
with Chrome. One of them is Canvas 3D. So Canvas 3D is a system developed by Mozilla, command line–
or, I’m sorry, an immediate-mode API
that allows developers to make OpenGL calls
from JavaScript. So OpenGL is one of the dominant
3D graphics APIs that we’d use from Windows
or on the Mac or on Linux. It’s the industry standard
for 3D graphics for native applications. So Canvas 3D is effectively
a set of JavaScript bindings that let you call OpenGL from
your JavaScript application. So this is running
as an extension in Mozilla today and has very cool stuff, and we’re actively
collaborating with them on figuring out
how to bring this into Chrome. It’s something
we’re very excited about. O3D is another effort
in this area. This is a plugin
that Google launched just over a month ago, and it’s a set of APIs for doing
3D graphics from JavaScript. This one is different
from Canvas 3D in that it’s a retain-mode API. So I guess the way to think
about the difference between Canvas 3D and O3D
is Canvas 3D is immediate mode, and O3D is retain mode. So there’s a good analogy there
between Canvas 2D and SVG. SVG is a retain-mode API whereas Canvas 2D
is an immediate-mode API. And so what we’re seeing
in 3D is that there’s a similar
dichotomy there, different kinds of APIs, depending on whether you want
immediate mode to make rendering calls
that render right now or whether you want to make
retain-mode rendering calls where you define a scene graph which is then rendered
by the system automatically in the same way that SVG is
or the DOM is, right? In a sense,
the DOM is a retain-mode API. I create the DOM, and then the browser takes care
of the rendering for me. So our expectation here is that this is gonna
continue to evolve. We’re in the very early
stages right now of working on standardization. There’s a couple of different
prototype implementations out there right now, and we expect it will take
several months to really get to a… to final set
of specifications. Could easily take years. So we’re actively working
with Apple and Opera and Mozilla
to move this forward, and it’s in
the very early stage. If you’re interested in this, I’d encourage you
to go to the… to the client demo pod
outside. There’s some pretty
neat games that folks have written
with this stuff, games and other applications
from ABC and Disney and some game developers
we’ve been talking to. And there’s a lot more, so I’ll say a lot on this slide
about some of the other things that we’re thinking about,
that we’re beginning work on. These are the things
that are the furthest away. Many of these are ones
we haven’t started on really at all yet, but we’ve realized
that they’re important areas, and they’re areas where we, the open source
browser community, need to make progress. And we’d love to hear from you about what the additional things
are that aren’t on our list. What else do you need? But let me go briefly through the ones that we know
are important. So there are some
that are already defined in the HTML5 specs that just
haven’t been implemented that we think are important. Geolocation
is a good example there. So there are
good specifications for how to make JavaScript API
calls from the client to figure out where you are
in the real world, right? What are my
geolocation coordinates? Those work by using backends
that are either cell tower based or Wi-Fi network based or based on GPS hardware
in your device if you have it. What’s neat about the API is you don’t have to know
how it works. You just make
a simple API call and say, “Where am I?”, and it gives you latitude
and longitude coordinates. Very simple.
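(A sketch of that call; the callback body is illustrative:)

    // "Where am I?" The browser picks the backend: Wi-Fi, cell towers, or GPS.
    navigator.geolocation.getCurrentPosition(function (position) {
      var lat = position.coords.latitude;
      var lng = position.coords.longitude;
      console.log("You are at " + lat + ", " + lng);
    });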
Forms2. So Forms2 attempts to improve how forms work on the web. It’s, again,
one that is specified. There hasn’t been
much work on it. The main things
that are interesting about the Forms2 specification,
if you want to check it out, is it’s got consistent
HTML interfaces for things like calendars,
right? We’ve all seen
web applications where you click on a date field
and it pops up a calendar. Well, that’s always
a custom piece of code. There is no built-in thing
in the web that says
let me pick a date now, which is why all those
calendars look different. What Forms2 tries to do
is say let’s standardize
all that stuff. Let’s make it so that doing a form
that needs a date entry is just a one-liner. Where I say “it’s a date”,
I use the date tag and it gives me a calendar. I select the date,
and it fills it in in the form.
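(A sketch of the one-liner being described; in the specification the "date tag" is an input type, and the field name here is illustrative:)

    <form>
      <!-- The browser supplies the calendar picker; no custom code needed. -->
      <label>Departure: <input type="date" name="departure"></label>
    </form>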
So Forms2 aspires to do this for as many different data types as possible, make it easy to enter them
sensible way with less code
from the app developer. Datagrid is another API
that’s been specced out that hasn’t been
implemented yet that we think’s
pretty interesting. So Datagrid attempts
to give you better control over
table-like layouts. So let me get into
some of the more exotic things. So in the last category
are things that are really not
very well defined right now. We think they’re broad areas
that are important and interesting to add
as capabilities for webapps, and we’d love to get
your comments and questions during the Q&A at the end
about these. So some of the ones
we’ve been thinking about are peer-to-peer APIs,
right? There are a lot of applications
I’d like to do like chat applications
or games, any number of different things,
customer support where I really… it doesn’t necessarily
make sense to do client/server-based
communication. I’d just as soon do
peer-to-peer communication. Why? Well, one issue
is latency, right? If I can do peer-to-peer,
it’s one hop. If I have to go
client to server to client, it’s two hops. If I’m playing a game with the guy sitting
at the desk next to me, that really makes no sense. It should just be one hop to the guy who’s 2 milliseconds
away on the network. So if we look at conventional
desktop operating systems, there are great
peer-to-peer networking APIs. The challenge for us
is to figure out how do we bring that
to the web? How can we create a peer-to-peer
networking API for the web that’s safe and secure that
gives you these capabilities? So it’s something
we’re starting to think about. Drag and drop support. This is another thing that doesn’t work very well
on the web today and is gonna require
some work from the browsers. I’d really like to be able
to drag files from my desktop into my web application and then have that
work reliably. And I’d like to be able to drag
it between web applications. If I’m looking
at an attachment that I just downloaded in
Gmail–or I haven’t downloaded, I see it in my Gmail,
I’d love to be able to just drag that
to another web application. Why can’t I do that today? Well, because we don’t
have the APIs. There’s no rocket science
here. We just need to figure out
how should the API work, how should
the security model work, how does the user
grant permission for this? The wonderful thing about
drag and drop is, in a sense, the user’s
already granting permission by doing the dragging. So it does seem like
a solvable problem, and it’s something
that we’re starting to… starting to look at
pretty seriously, and we’d love to hear
from you about. Webcam and microphone. As I mentioned, I mentioned the Google Talk
video conferencing example. There are lots of apps
people use today for doing video conferencing
on the web. None of them
are really webapps, right? So they may have a web
interface of some sort or a control panel,
but doing the heavy lifting of accessing the webcam,
accessing the microphone, running the compressors is always stuff
that people have to do either in native applications
or in plugins. What we’re trying to figure out
is how do we get to a world where I could write
an application like that purely in JavaScript? And it doesn’t look that hard. You really just need to add
JavaScript APIs for getting access
to the webcam, access to the microphone, support for our reasonable
array of codecs for live encoding and video. So it’s an area we’re starting
to look at quite seriously. O/S integration. So this again gets back sort of
to the drag and drop thing, but there’s a broad
category of things that native apps can do
that webapps can’t do that are hard to put in
any other bucket. So I’ve put them
in this bucket. These are things like,
you know… there are files I may have
on my desktop, like a doc file, where I could double-click
on the application, and I can get that
to launch Office today. But what if I want it to launch
into Google Docs? What if I want it to launch
into an online PDF previewer? So beginning to think of it,
how could we do that? How could we have
local file handlers that respond to OS events
like opening a file that are capable of launching
web applications, right? There are an increasing number
of web applications that can handle things like
doc files or Excel spreadsheets or any number
of different formats. Photos, right?
Photo editing, for example. So we’re starting
to think about that level of OS integration. Another one I think about
a lot is CD-ROM drive or a USB key. There are various devices that are hard to get to
right now from the web. It would be nice to be able
to pop in a CD that has photos on it
and have it bring me to Picasa. We’re not there yet. So we’re beginning work on this,
starting to think about it, and we’d love to hear
your thoughts. Another broad area,
one of the last I’ll talk about, is uploads. Uploads on the web today are not a good experience
for users, right? If I decide to go upload
a bunch of photos to a website, if it’s a pure webapp, that’s a pretty
awful experience. I have to go to the website.
I say I want to upload a photo. It brings me a file open
dialogue box. I have to go find the thing
in my file system by traversing around
the directory hierarchy and clicking. I find it.
It closes. And now I say, “Oh, now I got
another 50 to add.” So now I do it again
50 times. It’s a really horrible
experience, so we’re working
to think through– how should this work? How should the user
select multiple files? How do we handle the upload? How can we handle
re-startable uploads? You know, a big issue is
the user will begin an upload, you know, close their machine,
walk away, it goes to sleep. I’ve just destroyed
the upload. How do we make this
a better experience? That’s something
we’re beginning to think about, and we think it’s a very
important area for users. Lots of evidence for that
in that there are many third-party
Windows applications that are sort of
uploader applications whether for Flickr
or for Facebook or whatever. We’re trying to figure out how could we move those
into the web itself? These are some of the areas
we’ve been thinking about in terms of things
that the web needs in terms of
client-side capabilities. But again, we’d love to hear
from you about what’s important, what’s not important, and what’s missing
from our list. So a quick summary.
My last slide. And then we’ll open it up
to questions. So here’s a rough timeline
of what we’re working on. So we’ve been hard at work
on video and launching that
into the dev channel. That’s sort of where
we are right now. We’re hard at work
on local store, appcache, workers, database. A bunch of those
offline capabilities are already shipping in Gears
and have been for some time. We’re working on shipping them
in Chrome as fast as we can and checking them into
the Chromium code base. So if you look at
the source tree, you can see the check-ins
happening now, and those’ll be coming out
as soon as we’re done. The next-gen stuff
we’re working on– I mentioned a couple
of them today– are Web sockets. Another one I didn’t mention
is CSS3, so we’re doing a lot of work
with CSS3 to support some of
the advanced features like vertical text rendering
for Asian languages, support for things like Ruby and other things that are used
in a lot of languages with multiple character sets to show different
representations for the same text. Working on that. And then longer term,
working on some of these more exotic things
that I mentioned, so 3D certainly
being one of them. But also things like
peer-to-peer, better support for, you know,
clicking a doc file, having it open a webapp,
so this is our road map. These are the things
we’re thinking about. We’d love to hear from you. So I will open it up
to questions. And we also have
a Google Moderator forum we’ll pull up to answer
questions on the web. So why don’t I take
a live question first? We’ve got a microphone there
and a microphone there. So please walk up to the mic. We’d love comments
or questions. Fette: We’ve got 17 questions
on the Moderator. man: Um, one thing
I was wondering if it’s being thought about for the future
of browser support is window management. Because we build and maintain
a web application that we wrap in Chrome, and it’s designed to be used
on multiple– computers
with multiple monitors. So we’re dealing with
multiple windows, multiple tabs,
and even inside those some of them are frame sets, some of them have
high frames and stuff. And frankly,
it’s very difficult to find a window
from another window, depending on, you know,
some are popped up because they were, you know,
called by window.open. Some were created by somebody
browsed to the same domain on a URL,
that kind of thing, which is very hard to get,
you know, to be able to call–
hit a button in one window, find out where that other
window lives to actually call that action
and do some work. I don’t know. Has there been any discussion
or progress or thoughts on that? Papakipos: It’s not something
I’ve thought about personally. I think it’s a great question.
You’re right. If we look at how
desktop applications behave, they’ll often bring up
a set of windows, right? I’m thinking of like
a video editing application or a 3D graphics modeler,
right? It’ll bring up
the main editing window, some extra views,
a list view, a file view. You’re right–why can’t we do
that from web applications? That’s a great comment.
We’ll think about it. Cool. Cool idea. Yes? In red. man: Um…how is HTML5
going to be able to handle when a user does not have
a codec installed for video? Papakipos:
That’s a good question. So how do we handle
a missing codec? So we’ve defined the set
of codecs that we do support and the implementation
right now. Today it’s a fixed set
of codecs, so there isn’t any way
to sort of plug in another one. I mean, of course, you could
check in code into the code base and do edits
all open source. Yeah. I don’t have any
specific thoughts there. Ian? Fette: We’re definitely
trying to talk to other browser vendors, to other people
that are involved in the space of video editing software. And ideally,
we’d like to come to a place where there is… a set of things
that if you do, it just works. Like right now,
I know that if I make a JPEG, a GIF or a PNG
without transparency, it’ll just work
in pretty much all the browsers. We’re trying to get to a point where there’s something
similar for video. So we’re supporting H.264
and Ogg. There’s not
100% agreement yet. We’re working to try to get
some consensus and some sort of–
I don’t think this is gonna be something
that’s part of the standard, but we’re trying to get
some sort of industry best practice
and consensus around if you make a video like this,
it will just work. So that’s something
that’s really…on us but also on everyone else
to sort of, you know, help participate
in that discussion, help reach out to other people
involved in the video space, and help this discussion move
forward towards a consensus. man: Hi. My question is that the file name
you put in the video tag is limited
to the docu-file or can refer to
the remote file? Fette:
It’s just like an image. It can be…you know,
this one happened to be a local file on my computer, but I could say HTTP
someothersite.com/video.mp4, and assuming
that they didn’t have some HT access rule
restricting it, it would work. Papakipos: You can also do HTPS.
I don’t know if FTP works. If you’re brave. [man speaking indistinctly] Fette: So streaming is something
that we still have to look at. Right now we don’t support
any streaming protocols, but it’s something
that we’re looking at. We should also be sure to take
some of the Moderator questions. Papakipos: Yeah, let’s take
a couple questions from Moderator to be fair
to the web folks here. The first one?
Okay. Okay. “Do you believe
it’s safe enough “for us to start
developing sites or webapps leveraging HTML5?” That’s a good question. I think the ones
that are implemented in multiple browsers
are certainly… are certainly things
I would be comfortable using in a webapp
that I shipped. I mean, you gotta figure out
what makes sense for you. The way that the security
policies around these APIs work tends to be fairly
browser-specific, right? In many cases, for things
that are privacy-related or things that may consume
local resources like disk space, browsers tend to go
with interfaces where they bring up
a dialogue box to check with the user
on first use, to ask the user explicitly
to grant permission. Should this thing be able
to work offline? Should this thing be able
to pop up notifications? In general,
the browser vendors have done a very good,
thoughtful job at making sure that they
seek user permission, when important,
for this stuff. So I think, in general,
there has been a lot of thoughtful work
about security and privacy for these APIs. Certainly, some of them are in the more experimental
stage still. I’ll be the first to say that. Like 3D graphics,
for example, is still in the early stage, somewhat experimental
at this point. It’s not built into
the browser itself. But the ones that are
built into the browser– I can only speak for myself– I would be comfortable deploying
a webapp that used them. Fette: And we should point out
that if you go out to the developer sandbox, there’s a number of people
that are using that. So Mozilla
has the Bespin project, which is using canvas. And, you know, there’s always
some chicken and egg scenario, but certainly a lot of these
do already have good implementation
across multiple browsers, and I think that if you are
doing something that is so new, so innovative,
and just looks so cool, I think if you say, you know,
you need to upgrade your browser and you need to get
something new, it’s not gonna work
for 100% of people, but that’s how
we move forward. Papakipos: Cool. Okay.
Let’s take a live question. man: So I got a quick
question for you regarding the canvas. There is no event
on individual graphic elements that are on your canvas. Is there any plans
at any point in time to be able to support that? Because SVG has it.
VML has it. And, I mean, that is just
imperative for us to be able to have that. Any plans at all? Papakipos:
That’s a great question. And that actually gets back to one of the issues
I was discussing, sort of the differences between
immediate-mode APIs and retain-mode APIs. The challenge
with immediate-mode APIs is that things like picking– which is what
you’re describing, right, how do I click something
and get a call-back– make more sense in the context
of a retain-mode API like SVG than they do in an
immediate-mode API like canvas. My prediction is canvas probably
won’t grow that ability, but some of the retain-mode
versions of it are probably easier ways
to do that. I guess I’m thinking of it
that specifically in the 3D realm. I haven’t heard specifically about any sort of picking
support for canvas in the works so far. I think you might find SVG is better for that sort of
thing, though. man: Right.
And you guess… all the presentations,
even this morning… you always seem to exclude IE
out of this whole equation. But unfortunately,
IE, IE6 is still around, and it’s really killing us, and is there any effort
on your side to kind of bring them
onto the table and try and say, hey,
you know what, you guys have to start
adopting the HTML5 stuff? Fette: It’s–
Papakipos: It’s a good question. Fette: I think we should
give some props to Microsoft. Like, they have started
implementing some of the HTML5 features
in IE8. Obviously, we’d love
to see them do more. It’s an open standard
and, you know… there are Microsoft people
in attendance. They have name badges
that say Microsoft. [laughter]
You should corner one of them and share your opinion. man: It’s like they come
to conferences and say, “Hey, we got CSS 2.1
completely implemented now.” You’re like, come on.
People are at 2.0. Papakipos: Yeah, I think… yeah, we can’t implement it
for them, but I have seen a lot of signs
that they’re– man: You guys at Google, you
should be able to implement it. Come on.
[laughter] Papakipos: Thanks. Let’s take
another Moderator question. So the question is,
“Chrome…Chrome for Mac?” That’s a short question. Let’s see.
Well, it’s open source. Go build it and run it.
[laughter] Fette: It runs Gmail now.
Papakipos: Right. I mean, that’s what I do.
Fette: It’s getting better. Papakipos: So but nothing
specific to announce about public plans
for an official build, but definitely,
there’s lots of code in a very workable,
usable state, and we want to make sure
it’s really polished before we do anything final
that would affect end-users. Cool. Okay, another live
question. man: This is actually
about the worker threads, and I was curious if there’s
any sort of, like, mutual exclusion to avoid,
like, race conditions. Fette: So…JavaScript
in general tries to avoid
the notion of… locking anything that would
require you to do locking. [man speaking indistinctly] Papakipos: Oops.
We lost a mic. man: But, uh…what I was
thinking about is, like, for file access when you have offline
programs and all that, and you start doing stuff that
is, like, on the file system or just not necessarily
a variable. Is there any sort of thought
on that, maybe? Papakipos: I mean,
the general style, the way it works, tends to
avoid that kind of stuff. I think you probably could
get yourself in a deadlock if you tried, but it tends to be somewhat
immune to that because of the way
the API works. There’s two factors
that make that the case. One is that all of
the receive calls are basically call-back based. So you tend to receive
a call back and do something. In a lot of deadlock cases, I’ve seen in conventional
operating systems, you get into deadlock
situations ’cause you’ve got reads going on
on both sides of a pipe. So this is a more call-back
oriented thing, so it’s somewhat immune to that. I think you probably
could make it deadlock if you really try. If you push a bunch of messages
to someone and the other side’s
not listening, eventually, it will back up. Fette: I would say that if you
come into one of these scenarios where you think
that you need locking or you think that there
might be a deadlock, these are definitely
new APIs. The best thing to do is get on
the WhatWG mailing list. They’re open.
Anyone can email to them. And just send an email saying
like, “Look, “I think this is a scenario
where, you know, locking would be really useful
or I might need locking,” and then start a discussion
around that. Because, you know, these still
are at the early stages, and discussion
is most welcome. Papakipos:
And it’s entirely possible we need to add something
we haven’t figure out yet, so let us know if you need it. Cool. Let’s take one more
Moderator question. “Is there any work
for client side web applications “to gain access to server side
persistence model using a standard spaced
protocol?” That’s a good question. Well, let’s see… Not that I’m aware of. Most of our thinking
about offline so far has been on the client side
in terms of how do we have the fundamental
storage capabilities, how do we do appcache, how do we have a way
to capture URLs and display something
off the local file system? So I think, for the most part, we’ve been thinking primarily
about the client side thus far. There certainly are
Google technologies like App Engine you could use to store things
on the server side. And you could build
a persistent storage mechanism that way. But I think it’s largely
up to the folks in this room. You guys are the developers. We sort of are providing
the low-level client mechanisms and the server-side services
like App Engine. You should be able to,
I think, build the kind of persistent
system you’re looking for on top of that. Yes. Live question. man: Okay, you talked about
APIs, the device function
on your location and on the camera/microphone. And I wonder, the W3C is currently starting an activity intended to define a number of device APIs for web applications… and also a security model. What is Google’s view
on that? Will you support
that activity, and uh… be active in that group? Fette: So I don’t know
if we actually need a full W3C activity for this. I think a lot of these
are more straightforward, and we’ll just have to see,
as time goes on, sort of what gains traction
and what doesn’t. I don’t know at this
point in time that we would say we support
a W3C activity. man: So you don’t believe
it’s a good idea to standardize device APIs
or, uh… Fette: I don’t–
I think that as time goes on, we’ll try to figure out what
APIs make sense to standardize. I’m not making any statements
about whether a W3C activity is necessary
for that or not. I think that, you know,
certainly we’ve got a lot of browser vendors
involved. We’ve got a lot of interested
developers involved and that as time goes on,
we’ll see… we’ll see what direction
this goes in. Papakipos: In general, the approach we found works best
with standardization, and I guess I found in my career
even before Google works best, is to prototype something first
and then try to standardize it. So for many of these things–
we’re talking about today like peer-to-peer
and stuff like that– honestly, I don’t know
what I would propose to the standards organization
today, right? The first step for me is to start sketching out
some APIs, prototyping some things,
seeing what actually works, getting some people
to try it. So for many of these,
we’re not quite at the point of standardization yet. So far the model that the
standard orgs have been doing is to do individual standards
for these individual APIs. You’re describing a somewhat
more overarching thing. There hasn’t been any movement
in that direction thus far. It’s been more
individual standards for specific proposed APIs. Fette: So let’s take
another question. Thanks. man: Hi. So with HTML5,
and going forward, basically we’ll be using
more and more of desktop capabilities. So wouldn’t the browser
actually turn into a cross-platform runtime, which is capable of running
applications anyway? So are we actually
moving towards an application model only, or how is that
actually different? Apart from I do not need to probably install
and uninstall an application. What other differences
would I get? Papakipos: That is very much
how I think about it, right? We are, in a sense, making
this application runtime. That’s what a browser is,
right? A browser is–in effect, to the applications
running on it, a browser’s almost like
your operating system, right? You think when you’re developing
a web application and debugging it, you’re thinking more about
the browser you’re in than the OS you’re on. So I think we very much think
about it the way you describe. I think that’s a fair thought. The browser is the runtime
for applications. man: Okay, so the only
difference would be that the applications would be
cross-platform, right? Papakipos: Yeah. And I think
that’s one of the neat things that browsers bring
to the table is that if I know I run
in Safari, then I run in Safari
on Windows and Mac. And if I run in Firefox, I know I run on a variety
of operating systems, Linux and Windows
and Mac and others, right? So I agree. That’s one of the really
cool things about the web, and it’s one of the properties
we want to preserve as we add these new
capabilities. Great question. Cool, should we do
one more Moderator? Fette: I think we have time
for one more. So it’s a question on,
“There are many HTML5 features “that support accessibility,
e.g., deep linking “into applications
for screen readers. “Can you detail what work
Google is doing to extend the web
in those directions?” So I think a lot of
the work that we’re doing is trying to make
as much possible in HTML and open standards
as possible. So one great example for that
is the video tag. We’re trying to work on
support for closed captioning in video
in multiple formats. We’re looking at what do you
need to do subtitles? We’re looking at simple
subtitles like SubRip and– more complicated things
like ASS so that it is possible to do… to make better accessibility
for these new APIs. Papakipos: Cool.
Fette: Okay. Papakipos: Okay. Time for
one more live question. Go ahead. man: Hi. We can see that
HTML5 is really powerful. Everybody is happy with that. But based on here
that the browser must be very complicated,
it becomes a monster, it has DBMS
to support SQL. It has codecs
to support videos. And it will become
bigger and bigger. How can we get this browser
into our mobile devices? Papakipos: Okay, so, yeah, the question is how do we get
these to mobile devices? Well, the good news is
they’re on mobile devices. Many of the HTML5 APIs
I talked about today work on mobile browsers. And again, as with desktops, it varies depending
on which browser you’re on and which phone device
that you’re on. But a lot of the capabilities
I talked about like appcache and database
and geolocation work on the Android browser
today. Fette: And the iPhone browser. Papakipos:
And the iPhone browser. Yeah, we should give
a lot of props to iPhone. iPhone has been very early
in a lot of these HTML5 features for the browser for iPhone. So we’re seeing some very
encouraging work going on in that area. The good news is the memory
and flash capacity and whatnot on the phones
are going up, so they do seem
to be adding these things. There may be some APIs
that phones don’t add as quickly as desktop. I think a good example there
is 3D, right? 3D graphics is just starting
to take hold, I think, for laptops doing web browsing. Not quite ready for that
on the phone. It will come. But I think we’re a little bit
early on that still. So we will, in some cases, see the phone come
a little bit later. But the gap there is getting
shorter every year. It seems like phones
are starting to take up some of these features faster than I would have
predicted a couple years ago. Which is great to see. man: And as we can see
in the demo, we only support like
two, three codecs for video, and I believe in the future
we all want to support almost all kinds of codecs. And for the SQL, we want to be as good as the DBMS. So we have transactions,
we have the… Fette: I’m gonna jump in there
really quick with regard to video codecs. I don’t know if we actually want to make it that
complicated. One of the nice things
that we have right now is it’s something new
and we can define how do we make it simple
and work? Like if you’re a browser vendor,
you have to worry about 50 different image formats
right now. You have to worry about GIF, GIF89a, JPEG, JPEG 2000, TGA, X–tons of codecs. And it’s just complicated. So I think what we want to do is we want to make it simple
if we can. So I think we’re running
out of time. Papakipos: Yep.
We have to wrap up. Thank you all for coming
and for the great questions. [applause]
