
Source: Hybrid Apps and the Future of Mobile Computing – DZone Mobile

Hybrid Apps and the Future of Mobile Computing

Learn how hybrid app development is pulling ahead of native in the constantly changing and fluctuating mobile ecosystem.



The jury is still out on which type of mobile app is the future of mobile computing. The stats point to native being the dominant app type in use: most of the top 100 apps in the app stores are native, and comScore reports that 50% of all digital time is spent in mobile apps (though it doesn't give a split between native and hybrid), while just 7% is spent on the mobile web.

Figure: Digital Time Spent in July 2016 (Source: comScore)

While native apps are great for engagement, mobile websites still draw the majority of traffic. As Applause accurately puts it, "The Web Gets Eyeballs, Apps Keep Them."

Figure: Top 1K Mobile Apps vs. Top 1K Mobile Web Properties (Source: comScore)

Clearly, relying on native mobile apps alone is not enough. For this reason, Bloomberg and many other large mobile app publishers maintain both web and native mobile apps, not wanting to miss out on either. However, this is far from efficient. What we need is to build and ship mobile apps much like web apps: deploy once, and it works across all platforms. Fortunately, there are exciting developments on this front.

There are two strong currents in mobile app development bound to intersect in the near future. On one side, there are many app development frameworks that help you build hybrid apps with native-like functionality, including React Native, Cordova, NativeScript, and Ionic. They promise the best of both worlds: use HTML, JavaScript, and CSS to build mobile apps, and let those apps access native device functionality.
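
To make that concrete, here is a minimal sketch of the hybrid model using Cordova's standard geolocation plugin. The plugin name and the deviceready event are Cordova's documented ones; everything else is illustrative, not a prescription from the article.

// A minimal sketch of a hybrid app reaching native functionality through
// Cordova's geolocation plugin (cordova-plugin-geolocation). 'deviceready'
// is Cordova's signal that the native bridge has finished loading.
document.addEventListener('deviceready', () => {
  navigator.geolocation.getCurrentPosition(
    (position) => {
      // The same call a web page would make, but on a device the plugin
      // routes it to the platform's native location APIs.
      console.log(`lat ${position.coords.latitude}, lon ${position.coords.longitude}`);
    },
    (error) => console.error(`geolocation failed: ${error.message}`)
  );
});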

On the other side, the two major mobile platforms, iOS and Android, are taking steps to make mobile web apps function like native apps, allowing web apps to place their icons on the homescreen or app drawer, send notifications, and even leverage device functionality. Google’s Progressive Web Apps are the most recent development in this regard, and there are already numerous examples of apps that have gone progressive.
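
The web-side plumbing behind a Progressive Web App is small. Below is a hedged sketch of the service worker registration that enables offline behavior and push notifications; the '/sw.js' path is an assumption for illustration, and a web app manifest (not shown) supplies the home screen icon.

// Registering a service worker, the piece that lets a web app work offline
// and receive push notifications. '/sw.js' is a hypothetical worker script.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js')
    .then((reg) => console.log(`service worker registered, scope: ${reg.scope}`))
    .catch((err) => console.error('service worker registration failed:', err));
}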

As these two trends converge, they will lead to a shift from native mobile apps to hybrid apps. There are several reasons why hybrid apps are set to trump native apps in the near future:

App Store Limitations

Today, releasing a native mobile app involves packaging the code, submitting it to the app store, and waiting for it to be approved. The entire process can take anywhere from two to seven days. This is an eternity in the mobile world. Mobile app developers (especially those that already practice DevOps for their web apps) want to be able to update their mobile apps like their web apps, multiple times a day if necessary. This is not possible with the limitations of app stores, and hybrid apps are the way out.

Code Reuse

As most apps have an iOS and an Android version, they are developed in each platform's preferred programming language: Objective-C or Swift for iOS, and Java for Android. Hybrid apps, on the other hand, let you build mobile apps with languages your developers already know: HTML, JavaScript, and CSS. You write the code once and deploy it across all your mobile platforms. Mobile app testing benefits equally, because you don't need to write separate test scripts for each app type. Testing against a single codebase also reduces the testing infrastructure you need and simplifies the QA process. With the increasing fragmentation in device types and OS versions, this is becoming a necessity for mobile development.

The Rising Talent Gap

Code.org estimates there will be 1.4 million computing jobs available by 2020, and only 400,000 computer science students. This is also true for mobile development. Truly great iOS and Android developers are a rare find. It’s a better strategy to make the best use of the existing talent you have than to leave your mobile development at the mercy of scarcely available new talent.

Faster Time to Market

The popularity of mobile apps rises and falls faster than their web counterparts. Ratings, reviews, installs, daily active users and churn rate all add up to decide the fate of a mobile app. In this fast-paced world, hybrid moves you faster from idea to app than native.

DevOps for Mobile

Finally, hybrid apps let you extend DevOps to your mobile apps, too. They let you go from mammoth quarterly app updates to a bi-weekly cycle, and eventually let you update as frequently as your web app, which is close to impossible with native apps today. To update at this frequency, you'll need to automate the two key parts of your continuous integration (CI) process: builds and tests. This is where tools like Git, Jenkins, and Appium have a key role to play. When well integrated, they let you focus exclusively on developing and testing your app rather than worrying about each mobile platform's norms. This gives you the confidence to release multiple times a day and take ownership of your mobile development process.
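
As a rough illustration, a CI job might run an Appium smoke test like the sketch below after every build. It assumes a locally running Appium server and the WebdriverIO client; the capability values, selector, and .apk path are hypothetical.

import { remote } from 'webdriverio';

// A minimal Appium smoke test that a CI server like Jenkins could run per build.
async function smokeTest(): Promise<void> {
  const driver = await remote({
    hostname: 'localhost',
    port: 4723, // Appium's default port
    capabilities: {
      platformName: 'Android',
      'appium:automationName': 'UiAutomator2',
      'appium:app': '/builds/latest/app.apk', // hypothetical build artifact
    },
  });

  // Fail the build if the app's entry screen never appears.
  const loginButton = await driver.$('~login-button'); // accessibility id (hypothetical)
  await loginButton.waitForDisplayed({ timeout: 5000 });
  await driver.deleteSession();
}

smokeTest().catch((err) => {
  console.error(err);
  process.exit(1); // a non-zero exit marks the CI build as failed
});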

This post would sound too one-sided if I ignored the fact that, as of today, native apps deliver a better and faster UI than hybrid apps. This is the single biggest reason they're so popular with developers and users alike. However, hybrid apps are fast approaching native-like functionality. All of the reasons above add up to show why native apps, though the de facto choice for many today, can't hold that position much longer.

The mobile ecosystem changes faster than we’d like to believe. And it won’t be long before we look back at how primitive our mobile app development was in the era of app stores and their policing of native apps. Hybrid apps are the future of mobile computing.


Pitney Bowes: Transforming Digital Commerce with APIs | Apigee

Google's move into the world of enterprise APIs: Apigee.

Roger Pilc, Pitney Bowes' chief innovation officer, discusses how the company is well on its way to digitizing its legacy business and building new digital businesses by harnessing existing capabilities and leveraging modern technologies, including APIs and Apigee's API management platform. Read Pitney Bowes' full case study here.

Source: Pitney Bowes: Transforming Digital Commerce with APIs | Apigee

Restful API Design: An Opinionated Guide

One developer's opinion on what constitutes good API design. The journey covers everything from URL formats to error handling to verbs in URLs.

This is very much an opinionated rant about APIs, so it's fine if you have a different opinion; these are just mine. Most of the examples I talk through are from the Stack Exchange or GitHub APIs, mostly because I consider them to be well-designed, well-documented APIs that have non-authenticated public endpoints and should be familiar domains to a lot of developers.

URL Formats

Resources

OK, let’s get straight to one of the key aspects. Your API is a collection of URLs that represent resources in your system that you want to expose. The API should expose these as simply as possible — to the point that if someone was just reading the top level URLs, they would get a good idea of the primary resources that exist in your data model (e.g. any object that you consider a first-class entity in itself). The Stack Exchange API is a great example of this. If you read through the top level URLs exposed, you will probably find they match the kind of domain model you would have guessed:

  • /users
  • /questions
  • /answers
  • /tags
  • /comments

And while there is no expectation that anyone will be attempting to guess your URLs, I would say these are pretty obvious. What's more, if I were a client using the API, I could probably have a fair shot at understanding these URLs without any further documentation of any kind.
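
As a quick illustration that these URLs behave the way you would guess, the sketch below calls the real, public /users endpoint. The Stack Exchange API wraps results in an items array and requires a site parameter; the rest is illustrative.

// Fetching the top-level /users resource from the Stack Exchange API.
// The endpoint is public and unauthenticated; 'site' is required by that API.
async function listUsers(): Promise<void> {
  const res = await fetch('https://api.stackexchange.com/2.2/users?site=stackoverflow');
  const body = await res.json();
  console.log(`${body.items.length} users returned`);
}

listUsers().catch(console.error);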

Identifying Resources

To select a specific resource based on a unique identifier (an ID, a username, etc.), the identifier should be part of the URL. Here we are not attempting to search or query for something; rather, we are attempting to access a specific resource that we believe should exist. For example, if I attempt to access the GitHub API for my username, https://api.github.com/users/robhinds, I am expecting that concrete resource to exist.

The pattern is as follows (elements in square braces are optional):

/RESOURCE/[RESOURCE IDENTIFIER]

Including an identifier will return just the identified resource, assuming one exists; otherwise, it returns a 404 Not Found. (This differs from filtering or searching, where we might return a 200 OK and an empty list.) That said, this can be flexible: if you prefer to return an empty list for identified resources that don't exist, that is also a reasonable approach, as long as it is consistent across the API. The reason I go for a 404 when the ID is not found is that, normally, if our system makes a request with an ID, it believes that ID is valid, and if it isn't, something unexpected has happened. By contrast, if our system is filtering users by sign-up date, it's perfectly reasonable to expect the scenario where no user is found.
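
To make the distinction concrete, here is a minimal server-side sketch; Express and the in-memory store are assumptions for illustration, not anything the article prescribes.

import express from 'express';

const app = express();
const users = new Map<string, { id: string; name: string }>(); // in-memory store (assumption)

// An identified resource: the caller believes this ID exists, so a miss is a
// 404 Not Found, unlike a filter or search, where a miss is a 200 and an empty list.
app.get('/users/:id', (req, res) => {
  const user = users.get(req.params.id);
  if (!user) {
    res.status(404).json({ error: `no user with id ${req.params.id}` });
    return;
  }
  res.status(200).json(user);
});

app.listen(3000);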

Subresources

A lot of the time our data model will have natural hierarchies — for example, StackOverflow Questions might have several child Answers, etc. These nested hierarchies should be reflected in the URL hierarchy. If we look at the Stack Exchange API for the previous example:

/questions/{ids}/answers

Again, the URL (hopefully) makes clear what the resource is without further documentation: at a glance, it's all the answers that belong to the identified questions.

This approach naturally allows as many levels of nesting as necessary, but because many resources are top-level entities as well, you rarely need to go further than the second level. To illustrate, suppose we wanted to extend the query from all answers to a given question to instead query all comments for an identified answer. We could naturally extend the previous URL pattern as follows:

/questions/{ids}/answers/{ids}/comments

But as you have probably recognized, we have /answers as a top-level URL, so the additional /questions/{ids} prefix is surplus to identifying the resource. In fact, supporting the unnecessary nesting would also mean additional code and validation to ensure that the identified answers are actually children of the identified questions, as the sketch below shows.
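
Here is a hedged Express-style sketch of that extra work (the record shape and in-memory store are assumptions): the nested route must check parentage as well as existence, while the flat top-level route needs only the lookup.

import express from 'express';

interface Answer { id: string; questionId: string; comments: string[] } // hypothetical shape

const app = express();
const answers = new Map<string, Answer>(); // in-memory store (assumption)

// Nested route: must verify the answer exists AND belongs to the question.
app.get('/questions/:qid/answers/:aid/comments', (req, res) => {
  const answer = answers.get(req.params.aid);
  if (!answer || answer.questionId !== req.params.qid) {
    res.status(404).json({ error: 'no such answer under this question' });
    return;
  }
  res.json(answer.comments);
});

// Flat top-level route: the answer ID alone identifies the resource.
app.get('/answers/:aid/comments', (req, res) => {
  const answer = answers.get(req.params.aid);
  if (!answer) {
    res.status(404).json({ error: 'answer not found' });
    return;
  }
  res.json(answer.comments);
});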

There is one scenario where you may need this additional nesting: when a child resource's identifier is only unique in the context of its parent. A good example is GitHub's user and repository pairing. My GitHub username is a globally unique identifier, but the names of my repositories are only unique to me (someone else could have a repository with the same name as one of mine, as is frequently the case when a repository is forked). There are two good options for representing these resources:

  1. The nested approach described above. For the GitHub example, the URL would look like:
    /users/{username}/repos/{reponame}. I like this, as it's consistent with the recursive pattern defined previously, and it is clear what each of the variable identifiers relates to.
  2. Another viable option, and the approach GitHub actually uses, is:
    /repos/{username}/{reponame}. This breaks the repeating pattern of {RESOURCE}/{IDENTIFIER} (unless you consider the two URL sections a combined identifier). However, the advantage is that the top-level entity is what you are actually fetching: the URL serves a repository, so that is the top-level entity.

Both are reasonable options and really come down to preference. As long as it’s consistent across your API, then either is OK.

Filtering and Additional Parameters

Hopefully, the above is fairly clear and provides a high-level pattern for defining resource URLs. Sometimes, we want to go beyond this and filter our resources; for example, we might want to filter StackOverflow questions by a given tag. As hinted at earlier, we are not sure of any resource's existence here, we are simply filtering, so unlike with an incorrect identifier, we don't want a 404 Not Found response; rather, we return an empty list.

Filtering controls should be passed as URL query parameters (i.e., after the first ? in the URL). Parameter names should be specific, understandable, and lowercase. For example:

/questions?tagged=java&site=stackoverflow

All the parameters are clear and make it easy for the client to understand what is going on (also worth noting that https://api.stackexchange.com/2.2/questions?tagged=awesomeness&site=stackoverflow, for example, returns an empty list, not a 404 Not Found). You should also keep your parameter names consistent across the API. If you support common functions such as sorting or paging on multiple endpoints, make sure the parameter names are the same.
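
A matching server-side sketch (again with Express and an in-memory store as assumptions): a filter that matches nothing is a normal outcome, so it returns a 200 OK with an empty list, never a 404.

import express from 'express';

interface Question { id: string; tags: string[] } // hypothetical shape

const app = express();
const questions: Question[] = []; // in-memory store (assumption)

// Filtering via query parameters: an empty result is a 200, never a 404.
app.get('/questions', (req, res) => {
  const tagged = req.query.tagged as string | undefined;
  const matches = tagged ? questions.filter((q) => q.tags.includes(tagged)) : questions;
  res.status(200).json(matches);
});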

Verbs

As should be obvious from the previous sections, we don't want verbs in our URLs, so you shouldn't have URLs like /getUsers or /users/list. The reason is that the URL defines a resource, not an action. Instead, we use the HTTP methods to describe the action: GET, POST, PUT, HEAD, DELETE, etc.
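
In route form, the idea looks like the sketch below (Express-style, with an illustrative placeholder handler): one resource URL, with the verb supplied by the HTTP method.

import express, { Request, Response } from 'express';

const app = express();
const todo = (_req: Request, res: Response) => { res.sendStatus(501); }; // placeholder handler

// The URL names the resource; the HTTP method supplies the action.
app.get('/users', todo);        // list users (not /getUsers or /users/list)
app.post('/users', todo);       // create a user (not /createUser)
app.put('/users/:id', todo);    // update a user (not /users/:id/update)
app.delete('/users/:id', todo); // delete a user (not /users/:id/delete)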

Versioning

Like many of the RESTful topics, this is hotly debated and pretty divisive. Very broadly speaking, the two approaches to define API versioning are:

  • Part of the URL.
  • Not part of the URL.

Including the version in the URL largely makes it easier for developers to map their endpoints to versions, but for clients consuming the API, it can make upgrades harder (often they will have to find-and-replace API URLs to move to a new version). It can also make HTTP caching harder: if a client POSTs to /v2/users, the underlying data changes, so the cached response for a GET of /v2/users is now invalid. But the version prefix doesn't affect the underlying data, so that same POST has really invalidated the cache for /v1/users too, and a cache keyed on URLs won't know it. The Stack Exchange API uses this approach (as of writing, their API is based at https://api.stackexchange.com/2.2/).

If you choose not to include the version in your API URLs, two possible approaches are HTTP request headers and content negotiation. This can be trickier for the API developers (depending on framework support) and can also have the side effect of clients being upgraded without knowing it (if they don't realize they can specify the version in a header, they will default to the latest). The GitHub API uses this approach: https://developer.github.com/v3/media/#request-specific-version.
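
From the client side, header-based versioning looks like the sketch below, using the media type GitHub documents for requesting a specific version; the username is the one from the earlier example.

// Requesting a specific API version through the Accept header rather than the URL.
// 'application/vnd.github.v3+json' is GitHub's documented v3 media type.
async function getUser(): Promise<void> {
  const res = await fetch('https://api.github.com/users/robhinds', {
    headers: { Accept: 'application/vnd.github.v3+json' },
  });
  console.log(await res.json());
}

getUser().catch(console.error);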


Response Format

JSON is the standard response format for RESTful APIs. If required, you can also provide other formats (XML, YAML, etc.), which would normally be managed using content negotiation.

I always aim to return a consistent response message structure across an API. This is for ease of consumption and understanding across calling clients.

Normally, when I build an API, my standard response structure looks something like this:

{ "code": "200", "response": { /* some response data */ } }

This does mean that any client always needs to navigate down one layer to access the payload, but I prefer the consistency this provides. It also leaves room for other metadata at the top level: for example, if you have rate limiting and want to report remaining requests, that information is not part of the payload but can consistently sit at the top level without polluting the resource data.

This consistent approach also applies to error messages — the code (mapping to HTTP status codes) reflects the error and the response, in this case, is the error message returned.

Error Handling

Make appropriate use of the HTTP status codes: 2xx codes for successful requests, 3xx codes for redirection, 4xx codes for client errors, and 5xx codes for server errors. You should avoid ever intentionally returning a 500; 5xx codes should be reserved for when something unexpected goes wrong inside your application.

I combine the status code with the consistent JSON format described above.
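
Putting the two together, here is a hedged Express-style sketch (the helpers and in-memory store are hypothetical): every response uses the same { code, response } envelope, while the real HTTP status code is still set on the wire.

import express, { Response } from 'express';

const app = express();
const users = new Map<string, { id: string; name: string }>(); // in-memory store (assumption)

// Hypothetical helpers keeping success and error responses in one envelope.
function ok(res: Response, payload: unknown): void {
  res.status(200).json({ code: '200', response: payload });
}
function fail(res: Response, status: number, message: string): void {
  res.status(status).json({ code: String(status), response: message });
}

app.get('/users/:id', (req, res) => {
  const user = users.get(req.params.id);
  if (!user) {
    fail(res, 404, `no user with id ${req.params.id}`); // client error: 4xx
    return;
  }
  ok(res, user);
});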

Source: Restful API Design: An Opinionated Guide – DZone Integration

Aprende despacio (Learn Slowly)

They say that in our profession you have to keep learning all the time. Truth be told, I don't know many other professions in depth, but I suspect this isn't so extraordinary, and that to some extent a doctor, an economist, a carpenter, or a teacher also has to acquire new knowledge over the course of their career to stay current.

Where we do differ from other professions is that ours is a very young one: we still have very little history behind us, and we are constantly rethinking the best way to do things.

Moreover, software development encompasses many activities of the most diverse kinds, from the theory of computation to the configuration of globally interconnected systems, from managing large (or small) volumes of data to building ever friendlier and more powerful user interfaces that help us carry out all kinds of tasks.

Add to this that often all it takes to create something new is a computer and an internet connection, and it's only natural that every day a thousand new technologies, processes, and tools appear that will, supposedly, be The Next Big Thing™.

This can leave us in a state of permanent anxiety, thinking about everything we must learn (or so we believe) to remain qualified professionals and not fall out of date. It's what some call developaralysis: the feeling that the software industry moves so fast that it's impossible to keep up.

To a greater or lesser extent, we all have a list of things we'd like to set aside some time to play and experiment with, and for most of us that list grows much faster than we can work through it. Let me tell you a secret: it doesn't matter.

Most of the technologies we worry about not having time to learn today will be irrelevant within a couple of years.

I've been doing this long enough to have lived through many a The Next Big Thing™ that came to nothing, and unless you're a fortune teller, predicting which things will ultimately succeed and endure is so hard that it isn't worth worrying about.

By all this I don't mean that caring about learning isn't important. On the contrary, I've always believed it is essential to keep expanding our knowledge, and not only professionally but in every area of life. What I really mean is that learning new things is not the same as learning the newest things.

Knowing the 5,000 new APIs introduced in Android 5.0 or the wonders of ASP.NET vNext is fine, but none of it is really that important. The principles of Android programming rest on ideas that are more than 30 years old (object-oriented programming, design patterns), and what ASP.NET vNext proposes, however novel it may seem in the Microsoft world, is what other platforms (Ruby, Python) have been doing for a couple of decades. I don't mean to take value away from those two examples; I'm sure they contribute new and interesting nuances, but they don't carry the importance we sometimes want to give them.

When we focus on learning every novelty that springs up around us, we slip into a dynamic of superficial knowledge, because we don't have enough time to go deep into anything; we need to move on to the next thing so we don't fall behind. That's how we come to think we know about distributed systems because we've deployed a couple of machines on Azure, or consider ourselves skilled web application designers because we've followed an AngularJS tutorial.

Learning takes time. On the one hand, we need time to absorb the concepts behind a given technology or methodology, experiment with it, and consolidate that knowledge; on the other, we also need time to watch that technology evolve in the real world, see how it behaves in production, and see how the decisions we made at the outset affect the future of the project.

However good we are and however much we plan, there will always be changing circumstances throughout a project's life, and facing those problems is what will make us truly know that brilliant technology or methodology we've been using. Experience is fundamental, and it is not something you can acquire overnight.

In the end, many of these novel, revolutionary things that appear every day are neither so novel nor so revolutionary, and they end up resting on similar concepts, so once you gain a deep understanding of those concepts, carrying them from one technology to another is not that hard. If a technology is truly important, don't worry: it will last, and you'll have plenty of time to learn it.

As that great philosopher of our time, Jorge Berrocal of Gran Hermano I, put it: "en fin serafín, corre más el galgo que el mastín, pero si el camino es largo, más corre el mastín que el galgo" (roughly: "the greyhound runs faster than the mastiff, but if the road is long, the mastiff outruns the greyhound").

 

Source: Aprende despacio | Koalite