Google’s Improbable Deal to Recreate the Real World in VR | WIRED

GOOGLE’S IMPROBABLE DEAL TO RECREATE THE REAL WORLD IN VR

Island Creator, from Worlds Adrift, by Bossa Studios.

Let a thousand virtual worlds rain down from the clouds. Or rather, the cloud. That’s the call from Google as it gets behind a tiny British startup called Improbable.

Founded by two Cambridge graduates and backed by $20 million in funding from the venture capitalists at Andreessen Horowitz, Improbable offers a new way of building virtual worlds, including not just immersive games à la Second Life or World of Warcraft, but also vast digital simulations of real cities, economies, and biological systems. The idea is that these virtual worlds can run in a holistic way across a practically infinite network of computers, so that they can expand to unprecedented sizes and reach new levels of complexity.

Source: Google’s Improbable Deal to Recreate the Real World in VR | WIRED

Ha!!! SOLVED a long while ago ;) – PhoneGap. How to access the phone’s services from the web.

PhoneGap is an open source project that does precisely what we need, with a few useful extras. It provides us with a JavaScript bridge API to underlying hardware, such as the camera, GPS, and accelerometer, as well as the file system. It also supports a number of different platforms such as Apple iOS, Google Android, RIM’s BlackBerry, Palm’s webOS, and soon, Windows Phone 7. That’s quite a list, covering a sizable portion of the smartphone market.

See page 223 of the book I’m leaving you at this link.

SUPER EASY! Alexa Voice Services

What is AVS?

AVS is Amazon’s intelligent cloud service that allows you as a developer to voice-enable any connected product with a microphone and speaker. Users can simply talk to their Alexa-enabled products to play music, answer questions, get news and local information, control smart home products and more.

Join the community of developers who are revolutionizing how consumers interface with technology through the power of voice, with the Alexa Voice Service Developer Preview.

AVS is coming to the UK and Germany in early 2017. Sign up to be notified when the service is available.

Get started »

View documentation »

Source: Alexa Voice Services

Microsoft launches its AI strategy. Microsoft artificial intelligence moves beyond Office 365

Artificial intelligence is poised to pervade every area of business to boost revenue and improve operations.

ATLANTA — With artificial intelligence built into virtually every portion of its software stack, Microsoft’s grand vision is to make workers far more productive. But many workers are still warming to the company’s cloud productivity suite, which requires new ways of collaborating and working.

Over the past three years, Microsoft has touted Office 365 services as part of its cloud-first, mobile-first vision, enabling companies to work from anywhere easily and securely. It envisions customers using services like SharePoint Online, the cloud-based collaboration tool, to exchange documents without email attachments; OneDrive to centralize file storage; and Skype for Business to enable teleconferencing and video chat.

Office 365 helped Dusseldorf, Germany-based Henkel Corp., a 140-year-old manufacturer of products including Dial soap and adhesives, step into the future.

“The demand was so strong among business users for more collaboration,” said Markus Petrak, Henkel’s corporate director of IT digital workplace. “We had such an outdated workplace.”

Rather than upgrade its Office 2003 applications, the company migrated to Office 365.

“It was rocket science two years ago, and now the business is adapting,” he said.

At the Microsoft Ignite conference here this week, the vendor fleshed out its strategy for companies like Henkel to use the entire stack of services — from the Azure cloud, to Office 365 integrated with Dynamics CRM, to the Cortana natural-language digital assistant, to new Microsoft artificial intelligence (AI) technologies — to become modern, agile, and customer-focused.

Source: Microsoft artificial intelligence moves beyond Office 365

Build a chat bot in ten minutes with Watson – IBM Watson

We recently released the Watson Conversation service to make the difficult task of automatically understanding and responding to customers as simple as possible. The service provides a nifty GUI for training and configuring intents and dialog flow. As part of the launch, we’ve created some handy getting started resources, including a video, a demo, and a basic tutorial.

Source: Build a chat bot in ten minutes with Watson – IBM Watson

Pitney Bowes: Transforming Digital Commerce with APIs | Apigee

GOOGLE’S MOVE INTO THE WORLD OF ENTERPRISE APIs: APIGEE.

Roger Pilc, Pitney Bowes’ chief innovation officer, discusses how the company is well on its way to digitizing its legacy business and building new digital businesses by harnessing existing capabilities and leveraging modern new technologies, including APIs and Apigee’s API management platform.

Read Pitney Bowes’ full case study here.

Source: Pitney Bowes: Transforming Digital Commerce with APIs | Apigee

Restful API Design: An Opinionated Guide

One developer’s opinion on what constitutes good API design, touching on everything from URL formats to error handling to verbs in URLs.

This is very much an opinionated rant about APIs, so it’s fine if you have a different opinion; these are just my opinions. Most of the examples I talk through are from the Stack Exchange or GitHub APIs — mostly because I consider them to be well-designed, well-documented APIs that have non-authenticated public endpoints and should be familiar domains to a lot of developers.

URL Formats

Resources

OK, let’s get straight to one of the key aspects. Your API is a collection of URLs that represent resources in your system that you want to expose. The API should expose these as simply as possible — to the point that if someone was just reading the top level URLs, they would get a good idea of the primary resources that exist in your data model (e.g. any object that you consider a first-class entity in itself). The Stack Exchange API is a great example of this. If you read through the top level URLs exposed, you will probably find they match the kind of domain model you would have guessed:

  • /users
  • /questions
  • /answers
  • /tags
  • /comments

And while there is no expectation that anyone will be attempting to guess your URLs, I would say these are pretty obvious. What’s more, if I were a client using the API, I could probably have a fair shot at understanding these URLs without any further documentation of any kind.

Identifying Resources

To select a specific resource based on a unique identifier (an ID, a username, etc.), the identifier should be part of the URL. Here we are not attempting to search or query for something; rather, we are attempting to access a specific resource that we believe should exist. For example, if I attempt to access the GitHub API for my username, https://api.github.com/users/robhinds, I am expecting that concrete resource to exist.

The pattern is as follows (elements in square braces are optional):

/RESOURCE/[RESOURCE IDENTIFIER]

Including an identifier will return just the identified resource, assuming one exists; otherwise, it returns a 404 Not Found (this differs from filtering or searching, where we might return a 200 OK and an empty list). This can be flexible: if you prefer to return an empty list for identified resources that don’t exist, that is also a reasonable approach, once again, as long as it is consistent across the API. The reason I go for a 404 when the ID is not found is that normally, if our system makes a request with an ID, it believes that ID is valid; if it isn’t, that’s an unexpected exception. By contrast, if our system is filtering users by sign-up date, it’s perfectly reasonable to expect the scenario where no user is found.
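The distinction can be sketched as a pair of tiny handlers. This is a minimal illustration, not any real framework: the in-memory `USERS` store and the function names are hypothetical.

```python
# A sketch of the 404-vs-empty-list distinction: an identified resource that
# is missing is an error, while a filter that matches nothing is a normal result.
USERS = {
    "robhinds": {"id": "robhinds", "signup_date": "2014-03-01"},
    "octocat": {"id": "octocat", "signup_date": "2011-01-25"},
}

def get_user(user_id):
    """GET /users/{id}: the client asserts this resource exists."""
    user = USERS.get(user_id)
    if user is None:
        return 404, {"error": "user not found"}  # unexpected: bad ID
    return 200, user

def filter_users(signup_date):
    """GET /users?signup_date=...: an empty result is a perfectly normal outcome."""
    matches = [u for u in USERS.values() if u["signup_date"] == signup_date]
    return 200, matches  # 200 OK even when the list is empty
```

A missing ID surfaces as a 404, while a filter with no matches returns a 200 OK with an empty list, mirroring the reasoning above.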

Subresources

A lot of the time our data model will have natural hierarchies — for example, StackOverflow Questions might have several child Answers, etc. These nested hierarchies should be reflected in the URL hierarchy. If we look at the Stack Exchange API for the previous example:

/questions/{ids}/answers

Again, the URL is (hopefully) clear without further documentation: at a glance, it refers to all answers that belong to the identified questions.

This pattern naturally allows as many levels of nesting as necessary, but because many resources are also top-level entities, you rarely need to go beyond the second level. To illustrate, let’s say we wanted to extend the query from all answers to a given question to instead query all comments for an identified answer — we could naturally extend the previous URL pattern as follows:

/questions/{ids}/answers/{ids}/comments

But as you have probably recognized, we have /answers as a top-level URL, so the additional /questions/{ids} prefix is surplus to our identification of the resource (and supporting the unnecessary nesting would also mean additional code and validation to ensure that the identified answers are actually children of the identified questions).
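The extra validation cost of redundant nesting can be sketched as follows; the data and helper names are hypothetical, purely to illustrate the point.

```python
# With /answers as a top-level resource, an answer ID alone identifies it.
# Supporting /questions/{qid}/answers/{aid} as well forces an extra parent check.
ANSWERS = {7: {"id": 7, "question_id": 42}}

def get_answer_flat(answer_id):
    """GET /answers/{id}: one lookup, no parent check needed."""
    return ANSWERS.get(answer_id)

def get_answer_nested(question_id, answer_id):
    """GET /questions/{qid}/answers/{aid}: must also verify the parent."""
    answer = ANSWERS.get(answer_id)
    if answer is None or answer["question_id"] != question_id:
        return None  # extra validation the flat URL avoids entirely
    return answer
```

Both routes resolve the same resource, but the nested one carries validation logic the flat one never needs.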

There is one scenario where you may need this additional nesting: when a child resource’s identifier is only unique in the context of its parent. A good example of this is GitHub’s user and repository pairing. My GitHub username is a globally unique identifier, but the names of my repositories are only unique to me (someone else could have a repository with the same name as one of mine — as is frequently the case when a repository is forked). There are two good options for representing these resources:

  1. The nested approach described above. So for the GitHub example, the URL would look like:
    /users/{username}/repos/{reponame}. I like this, as it’s consistent with the recursive pattern defined previously, and it is clear what each of the variable identifiers is relating to.
  2. Another viable option (the approach GitHub actually uses) is as follows:
    /repos/{username}/{reponame}. This breaks the repeating pattern of {RESOURCE}/{IDENTIFIER} (unless you consider the two URL sections a combined identifier). However, the advantage is that the top-level entity is what you are actually fetching — in other words, the URL is serving a repository, so that is the top-level entity.

Both are reasonable options and really come down to preference. As long as it’s consistent across your API, then either is OK.

Filtering and Additional Parameters

Hopefully, the above is fairly clear and provides a high-level pattern for defining resource URLs. Sometimes, we want to go beyond this and filter our resources — for example, we might want to filter StackOverflow questions by a given tag. As hinted at earlier, we are not sure of any resource’s existence here; we are simply filtering — so unlike with an incorrect identifier, we don’t want to return a 404 Not Found, but rather an empty list.

Filtering controls should be passed as URL query parameters (i.e. after the first ? in the URL). Parameter names should be specific, understandable, and lowercase. For example:

/questions?tagged=java&site=stackoverflow

All the parameters are clear and make it easy for the client to understand what is going on (also worth noting that https://api.stackexchange.com/2.2/questions?tagged=awesomeness&site=stackoverflow, for example, returns an empty list, not a 404 Not Found). You should also keep your parameter names consistent across the API. If you support common functions such as sorting or paging on multiple endpoints, make sure the parameter names are the same.
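A sketch of building such a filtered URL with consistent, lowercase parameter names. The base URL is the real Stack Exchange endpoint; the helper function itself is illustrative, not part of any library.

```python
from urllib.parse import urlencode

def build_filter_url(base, resource, **params):
    """Build a filter URL: a resource path plus lowercase query parameters."""
    # Sort for a stable, predictable ordering; lowercase the names for consistency.
    query = urlencode(sorted((k.lower(), v) for k, v in params.items()))
    return f"{base}/{resource}?{query}" if query else f"{base}/{resource}"

url = build_filter_url(
    "https://api.stackexchange.com/2.2", "questions",
    tagged="java", site="stackoverflow",
)
# url == "https://api.stackexchange.com/2.2/questions?site=stackoverflow&tagged=java"
```

Normalizing names in one place like this is an easy way to keep parameter naming consistent across every endpoint of the API.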

Verbs

As should be obvious in the previous sections, we don’t want verbs in our URLs, so you shouldn’t have URLs like /getUsers or /users/list, etc. The reason for this is the URL defines a resource, not an action. Instead, we use the HTTP methods to describe the action: GET, POST, PUT, HEAD, DELETE, etc.
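The idea can be sketched as a routing table: the URL names the noun, and the HTTP method carries the verb. The routes and descriptions here are hypothetical, purely to contrast with /getUsers-style URLs.

```python
# One noun URL, many HTTP methods, instead of verb URLs like /getUsers.
ROUTES = {
    ("GET",    "/users"):      "list all users",
    ("POST",   "/users"):      "create a user",
    ("GET",    "/users/{id}"): "fetch one user",
    ("PUT",    "/users/{id}"): "replace a user",
    ("DELETE", "/users/{id}"): "delete a user",
}

def describe(method, path):
    """Look up what a (method, resource) pair means in this sketch."""
    return ROUTES.get((method, path), "405 Method Not Allowed / 404 Not Found")
```

Five distinct actions share just two URLs; the resource stays a noun and the method does the rest.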

Versioning

Like many of the RESTful topics, this is hotly debated and pretty divisive. Very broadly speaking, the two approaches to define API versioning are:

  • Part of the URL.
  • Not part of the URL.

Including the version in the URL largely makes it easier for developers to map their endpoints to versions, but for clients consuming the API it can make upgrades harder (often they will have to find-and-replace API URLs to move to a new version). It can also make HTTP caching harder: if a client POSTs to /v2/users, the underlying data changes, so the cache for GETting users from /v2/users is now invalid. However, the API version doesn’t affect the underlying data, so that same POST has also invalidated the cache for /v1/users, etc. The Stack Exchange API uses this approach (as of writing, their API is based at https://api.stackexchange.com/2.2/).

If you choose not to include the version in your URLs, two possible approaches are HTTP request headers or content negotiation. This can be trickier for the API developers (depending on framework support, etc.) and can also have the side effect of clients being upgraded without knowing it (e.g. if they don’t realize they can specify the version in a header, they will default to the latest). The GitHub API uses this approach: https://developer.github.com/v3/media/#request-specific-version.
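The header-based approach can be sketched by parsing a GitHub-style vendor media type (`application/vnd.github.v3+json`) out of the Accept header. The parsing helper and the default value are illustrative, not GitHub's actual server logic.

```python
import re

def requested_version(accept_header, default="v3"):
    """Extract an API version from a vendor media type in the Accept header.

    Falling back to `default` illustrates the upgrade-without-knowing-it risk:
    clients that send no explicit version silently get whatever the server
    currently treats as the latest.
    """
    match = re.search(r"application/vnd\.github\.(v\d+)\+json", accept_header or "")
    return match.group(1) if match else default
```

A client sending `Accept: application/vnd.github.v2+json` is pinned to v2, while one sending a plain `application/json` drifts with the default.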


Response Format

JSON is the de facto standard response format for RESTful APIs. If required, you can also provide other formats (XML, YAML, etc.), normally managed via content negotiation.

I always aim to return a consistent response message structure across an API. This is for ease of consumption and understanding across calling clients.

Normally, when I build an API, my standard response structure looks something like this:

{ "code": "200", "response": { /* some response data */ } }

This does mean that any client always needs to navigate down one layer to access the payload, but I prefer the consistency this provides, and it also leaves room for other metadata to be provided at the top level (for example, if you have rate limiting and want to provide information regarding remaining requests, etc., this is not part of the payload but can consistently sit at the top level without polluting the resource data).

This consistent approach also applies to error messages: the code (mapping to HTTP status codes) reflects the error, and the response, in this case, is the error message returned.

Error Handling

Make appropriate use of HTTP status codes for errors: 2xx codes for successful requests, 3xx codes for redirects, 4xx codes for client errors, and 5xx codes for server errors (you should avoid ever intentionally returning a 500 — these are for when unexpected things go wrong within your application).

I combine the status code with the consistent JSON format described above.
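The combined approach can be sketched as one helper producing the consistent envelope for both successes and errors. The helper names and the rate-limit metadata field are illustrative assumptions, not part of any framework.

```python
def envelope(code, payload, **metadata):
    """Wrap any payload in the consistent top-level response structure."""
    body = {"code": str(code), "response": payload}
    body.update(metadata)  # e.g. rate-limit info sits beside the payload, not in it
    return body

def ok(payload, **metadata):
    """A successful response: the payload sits one level down, under 'response'."""
    return envelope(200, payload, **metadata)

def error(code, message):
    """Errors use the same shape: the response is simply the error message."""
    return envelope(code, {"message": message})
```

For example, `ok([{"id": 1}], remaining_requests=41)` yields `{"code": "200", "response": [{"id": 1}], "remaining_requests": 41}`: clients always navigate down one level for the payload, and metadata never pollutes the resource data.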

Source: Restful API Design: An Opinionated Guide – DZone Integration