Google has released a new tool, named Google Web Designer, for creating interactive HTML5 sites and ads. It offers a modern WYSIWYG interface, so you don’t need to dive into the code to get output, and the tools are mostly design-oriented. However, the generated code is always there; it can be edited or tweaked, and the result is displayed automatically.
The tool comes with ready-to-use settings for designing Google-powered ads (DoubleClick, AdMob) that will work on any device. There are built-in components such as 360° and carousel galleries, YouTube video embedding and more. A timeline is provided for animations, and anything can be drawn with a pen tool (not limited to shapes).
Google Web Designer is currently in beta and is available for both Windows and Mac.
HTML5’s offline access covers essentially the same functionality as Google’s Gears browser plugin, and that’s a problem. Problems are best handled with quick and simple fixes, and it looks like Google has opted for just that by “letting go” of Gears.
Of course, now we just need to wait out the interim for the HTML5 spec to get up and running.
A while back I wrote something on doing Monte Carlo simulations with Web Services and SharePoint. Halfway through, I mentioned that Google PageRank is defined by a Markov chain, which can in turn be approximated by Markov chain Monte Carlo methods. Not that it concerned me, but only one person mentioned this, and even then only vaguely. Huh…
This actually is a big deal. In fact, a very big deal; a multi-billion-dollar deal in the case of Google PageRank. Distributed computing has the power to help us solve many problems if applied correctly. The “cloud” does not. (A topic for later.) Probably the greatest hurdle in getting people on board is that this technology has uses beyond the scope of most people’s daily lives. For example…
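To make the Markov-chain connection above concrete, here is a minimal Monte Carlo sketch of PageRank: simulate a “random surfer” walking a toy link graph and count visits. The graph and page names are made up for illustration; the visit frequencies approximate the stationary distribution of the PageRank Markov chain.

```python
import random

# Toy web graph (hypothetical): page -> pages it links to.
links = {
    "a": ["c"],
    "b": ["c"],
    "c": ["a", "b"],
}

def monte_carlo_pagerank(links, damping=0.85, steps=100_000, seed=0):
    """Estimate PageRank by simulating one long random-surfer walk:
    follow an outgoing link with probability `damping`, else teleport."""
    rng = random.Random(seed)
    pages = list(links)
    visits = {p: 0 for p in pages}
    page = rng.choice(pages)
    for _ in range(steps):
        visits[page] += 1
        if rng.random() < damping and links[page]:
            page = rng.choice(links[page])   # follow a link
        else:
            page = rng.choice(pages)         # teleport to a random page
    # Visit frequencies converge to the chain's stationary distribution.
    return {p: v / steps for p, v in visits.items()}

result = monte_carlo_pagerank(links)
print(result)   # "c" collects the most rank: both other pages link to it
```

In production, PageRank is computed with the power method over the full link matrix; the random-walk version above just shows why a Monte Carlo process over a Markov chain yields the same ranking.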
A paper published in PLoS last week, on September 4th, 2009, titled “Can an Eigenvector Measure Species’ Importance for Coextinctions?”, argues that PageRank can be applied to the study of food webs. Food webs are the complex networks of who eats whom in an ecosystem. Typically we’re at the top, unless Hollywood or very bad planning is involved.

Essentially, the scientists are saying that their particular version of PageRank could be a simple way of working out which extinctions would lead to ecosystem collapse. A rather handy thing to have these days… As every species is embedded in a complex network of relationships with others, even a single extinction can rapidly cascade into the loss of seemingly unrelated species. Investigating when this might happen using more conventional methods is complicated, as even in simple ecosystems the number of combinations exceeds the number of atoms in the universe. For example, a lottery in which 8 numbers are drawn, each ranging between 1 and 50, has 39,062,500,000,000 different combinations…
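The lottery figure above is easy to verify: it counts ordered draws with repetition allowed, i.e. 50 choices for each of the 8 numbers.

```python
# 8 draws, each from 1-50, order significant, repetition allowed: 50**8.
combinations = 50 ** 8
print(f"{combinations:,}")   # 39,062,500,000,000 -- the figure in the text
```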
The researchers had to tweak PageRank to adapt it to their ecology-focused purposes.
“First of all we had to reverse the definition of the algorithm,” the authors explain. “In PageRank, a web page is important if important pages point to it. In our approach, a species is important if it points to important species.”
They also tested it against algorithms already in use in computational biology to solve the same problem. PageRank, in its adjusted form, gave exactly the same solution as these much more complicated algorithms.
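The reversal the authors describe can be sketched in a few lines. Taking food-web edges as prey → predator, “a species is important if it points to important species” is equivalent to running ordinary PageRank on the transposed graph. The four-species web and its names below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical toy food web, edges prey -> predator (energy flow).
# Indices: 0=grass, 1=mouse, 2=snake, 3=hawk (illustrative names only).
supports = np.array([
    [0, 1, 0, 0],   # grass -> mouse
    [0, 0, 1, 1],   # mouse -> snake and hawk
    [0, 0, 0, 1],   # snake -> hawk
    [0, 0, 0, 0],   # hawk supports nothing
], dtype=float)

def pagerank(adj, damping=0.85, iters=200):
    """Standard PageRank by power iteration; dangling nodes jump uniformly."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Row-normalize; rows with no outgoing edges become uniform jumps.
    transition = np.where(out > 0, adj / np.where(out > 0, out, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * rank @ transition
    return rank

# Reversed PageRank = ordinary PageRank on the transposed graph.
importance = pagerank(supports.T)
print(importance)   # grass ranks highest: everything ultimately depends on it
```

Note how the reversal changes the meaning: in web PageRank, grass would be unimportant (nothing “links” to it), but in the reversed version it scores highest because every other species in the chain depends on it, which is exactly the coextinction risk the paper is after.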
With the right design, SharePoint can be an extremely useful, and entirely appropriate, interface for accessing and disseminating the inputs and outputs of such an effort. It can store and present this data with all of the requisite benefits one would expect from a collaborative platform. Certainly there’s a world of work involved in doing something like this, but the key point is that the “right tool for the right job” mantra applies here. “All” you need is:
- Visual Studio
Google released a developer preview of its new search tool, Caffeine, which it claims will improve Google search’s:

- Size & comprehensiveness
- Index size

The developer version is pre-beta, which really means absolutely nothing where Google is concerned, and it is fully functional, so I took it through a few hoops. I searched for “SharePoint” and got the following results:
- Results 1: 1 – 10 of about 21,100,000 for SharePoint. (0.20 seconds)
- Results 2: 1 – 10 of about 21,100,000 for SharePoint. (0.10 seconds)
- Results 3: 1 – 10 of about 21,100,000 for SharePoint. (0.12 seconds)
- Results 1: 1 – 10 of about 17,200,000 for SharePoint. (0.14 seconds)
- Results 2: 1 – 10 of about 17,200,000 for SharePoint. (0.09 seconds)
- Results 3: 1 – 10 of about 17,200,000 for SharePoint. (0.12 seconds)
Looks like keyword and phrase relevancy just increased. As did the index size, which may explain the almost consistent lag in speed, though that could be a resource issue too. All told, this was possibly a useless exercise, as it is, after all, still in “beta”… However, I do look forward to the imminent deluge of posts that will compare Caffeine to Bing and invariably delve into fanboyism.