Archive for the ‘Services’ Category

A Beginner’s Guide to Progressive Web Apps

Thursday, January 23rd, 2020

A Progressive Web App, also known as a PWA, is a web app that “uses modern web capabilities to deliver an app-like experience.” It combines the best of web apps and mobile apps: while it is built using web technologies, a PWA developed by a good app design agency feels and acts like a mobile app.

For instance, while browsing certain websites in your mobile browser, you might have come across a pop-up banner asking if you want to add the website to your home screen. If you choose “Add to Home Screen”, the “app” installs itself in the background without you ever having to go to the app store to download it. Once installation is done, you can access the same content and the same experience, but this time from your phone, without requiring a browser.

This is what a Progressive Web App is: it lets you install an app directly from a web application, combining the best of both worlds, and it even works offline like a native mobile app. This means you can browse the content even when you do not have internet access.
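That offline capability typically comes from a service worker, a script the browser runs in the background to intercept network requests and serve cached responses. As a minimal sketch (not tied to any particular site; the path /sw.js is a placeholder), registering one from your page’s JavaScript looks like this:

// Register a service worker, if the browser supports it.
// '/sw.js' is a hypothetical path to your service worker script.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .then((registration) => {
        console.log('Service worker registered with scope:', registration.scope);
      })
      .catch((error) => {
        console.error('Service worker registration failed:', error);
      });
  });
}

Together with a web app manifest, this registration is what allows the browser to offer “Add to Home Screen” and to serve cached content when the network is unavailable.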

What is the function of Progressive Web Apps?

Native mobile apps found in app stores are able to carry out certain functions like working offline, launching from the home screen, and sending push notifications. Apart from this, another key difference from web applications is that native mobile apps have that distinct experience of looking and feeling like an app.

However, mobile web apps, which are accessed through a browser on a phone, do not have the qualities mentioned above. This is where Progressive Web Apps come in. When these apps are developed by a UI/UX design company, a set of best practices is used to make web applications work and feel like native mobile apps from app stores.

Top UX design firms deliver user experiences through progressive enhancement, which means that regardless of which device or browser you use, you can still access content easily and smoothly. Even if some features are unavailable, the user experience is not compromised with a PWA: the app continues to perform and function the way it should.

So, in other words, the goal of a Progressive Web App is to deliver an experience so smooth, seamless, and uniform that users cannot tell the difference between a PWA and a native mobile app.

Connecting GraphQL using Apollo Server

Thursday, January 23rd, 2020

Introduction

Apollo Server is a library that helps you connect a GraphQL schema to an HTTP server in Node.js. We will explain it through an example; the project can be cloned from the link below:

git clone https://prwl@bitbucket.org/prwl/apollo-tutorial.git

The technology and its concepts are best explained through the challenge below.

Challenge

Here, the main goal is to create a project directory and install the required packages. This will eventually lead us to implementing our first subscription in GraphQL with Apollo Server and PubSub.

Solution

For this, the first step is to create a new folder in your working directory, change into it, and initialize it as the project that will hold your server code; this creates the package.json file for us. After that, we install a few libraries. Once these packages are installed, the next step is to create an index.js file in the root of the server.

Create Directory

mkdir apollo-tutorial
cd apollo-tutorial
npm init -y

Install Packages

npm install apollo-server-express express graphql nodemon apollo-server

Connecting Apollo Server

index.js is where we wire up Apollo Server; every library installed above is put to work in this file. First, import the necessary parts for getting started with Apollo Server in Express. Using Apollo Server’s applyMiddleware() method, you can opt in any middleware, which in this case is Express.

import express from 'express';
import { ApolloServer, gql } from 'apollo-server-express';

const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'Hello World!'
  }
};

const server = new ApolloServer({ typeDefs, resolvers });
const app = express();
server.applyMiddleware({ app });

app.listen({ port: 4000 }, () =>
  console.log(`Server ready at http://localhost:4000${server.graphqlPath}`)
);

The GraphQL schema provided to Apollo Server defines all the data that can be read and written via GraphQL; requests can come from any client that consumes the GraphQL API. The schema consists of type definitions, starting with a mandatory top-level Query type for reading data, followed by fields and nested fields. The GraphQL specification offers various scalar types for defining strings (String), booleans (Boolean), integers (Int), and more.

const typeDefs = gql`
  type Query {
    hello: Message
  }

  type Message {
    salutation: String
  }
`;

const resolvers = {
  Query: {
    hello: () => ({ salutation: 'Hello World!' })
  }
};

In an Apollo Server setup, resolvers are used to return data for the fields defined in the schema. The data source doesn’t matter: the data can be hardcoded, come from a database, or come from another (RESTful) API endpoint.
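As an illustration, here is a hedged sketch of the same hello field resolved from a REST endpoint instead of hardcoded data. The URL and response shape are hypothetical, and the node-fetch package is assumed to be installed:

import fetch from 'node-fetch';

const resolvers = {
  Query: {
    // Resolve the field from a (hypothetical) REST endpoint; to a GraphQL
    // client, this looks exactly the same as the hardcoded version.
    hello: async () => {
      const response = await fetch('https://api.example.com/greeting');
      const data = await response.json();
      return data.message;
    },
  },
};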

Mutations

So far, we have only defined queries in our GraphQL schema. Apart from the Query type, there are also Mutation and Subscription types. In the Mutation type, you can group all the GraphQL operations that write data instead of reading it.

const typeDefs = gql`
  type Query {
    # ...
  }

  type Mutation {
    createMessage(text: String!): String!
  }
`;

As the snippet above shows, the createMessage mutation accepts a non-nullable text input as an argument and returns the created message as a string.

Again, you have to implement a resolver as the counterpart for the mutation, just as with the previous queries. This happens in the Mutation part of the resolver map:

const resolvers = {
  Query: {
    hello: () => 'Hello World!'
  },
  Mutation: {
    createMessage: (parent, args) => {
      const message = args.text;
      return message;
    },
  },
};

The mutation’s resolver has access to the text in its second argument. The parent argument isn’t used.

So far, the mutation only creates a message string and returns it to the API. However, most mutations have side effects, because they write data to a data source or perform another action. Most often, that will be a write operation to your database, but in this case we are just returning the text passed in as an argument.
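To make the side effect concrete, here is a hedged sketch (not part of the original example) that persists each message in an in-memory array standing in for a real database:

// An in-memory array standing in for a real data source.
const messages = [];

const resolvers = {
  Mutation: {
    createMessage: (parent, args) => {
      const message = args.text;
      // The side effect: writing to our makeshift data source.
      messages.push(message);
      return message;
    },
  },
};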

That’s it for the first mutation. You can try it right now in GraphQL Playground:

mutation {
  createMessage(text: "Hello GraphQL!")
}

The result for the mutation should look like this, as per your defined sample data:

{
  "data": {
    "createMessage": "Hello GraphQL!"
  }
}

Subscriptions

So far, you used GraphQL to read and write data with queries and mutations. These are the two essential GraphQL operations to get a GraphQL server ready for CRUD operations. Next, you will learn about GraphQL Subscriptions for real-time communication between GraphQL client and server.

Apollo Server Subscription Setup

Because we are using Express as middleware, expose the subscriptions with an advanced HTTP server setup in the index.js file:

import http from 'http';

// ...

server.applyMiddleware({ app, path: '/graphql' });

const httpServer = http.createServer(app);
server.installSubscriptionHandlers(httpServer);

httpServer.listen({ port: 8000 }, () => {
  console.log('Apollo Server on http://localhost:8000/graphql');
});

To complete the subscription setup, you’ll need to use one of the available PubSub engines for publishing and subscribing to events. Apollo Server comes with its own by default.

Let’s implement the subscription for message creation, making it possible for another GraphQL client to listen for newly created messages.

Create a file named subscription.js in the root directory of your project and paste the following into it:

import { PubSub } from 'apollo-server';

export const CREATED = 'CREATED';

export const EVENTS = {
  MESSAGE: CREATED,
};

export default new PubSub();

The only piece missing is using the event and the PubSub instance in your resolver.

import pubsub, { EVENTS } from './subscription';

// ...

const resolvers = {
  Query: {
    // ...
  },
  Mutation: {
    // ...
  },
  Subscription: {
    messageCreated: {
      subscribe: () => pubsub.asyncIterator(EVENTS.MESSAGE),
    },
  },
};

Also update your schema for the newly created Subscription:

const typeDefs = gql`
  type Query {
    # ...
  }

  type Mutation {
    # ...
  }

  type Subscription {
    messageCreated: String!
  }
`;

The subscription resolver provides a counterpart for the Subscription type in the schema. However, since it uses a publisher-subscriber mechanism (PubSub) for events, we have only implemented the subscribing side, not the publishing side. A GraphQL client can listen for changes, but no changes are published yet. The best place to publish a newly created message is in the same resolver where the message is created:

import pubsub, { EVENTS } from './subscription';

// ...

const resolvers = {
  Query: {
    // ...
  },
  Mutation: {
    createMessage: (parent, args) => {
      const message = args.text;
      pubsub.publish(EVENTS.MESSAGE, {
        messageCreated: message,
      });
      return message;
    },
  },
  Subscription: {
    // ...
  },
};

We have now implemented our first subscription in GraphQL with Apollo Server and PubSub. To test it, create a new message in one tab of the GraphQL Playground while listening to the subscription in another tab.

In the first tab, execute the subscription:

subscription {
  messageCreated
}

In the second tab, execute the createMessage mutation:

mutation {
  createMessage(text: "My name is John.")
}

Now, check the first tab (the subscription) for a response like this:

{
  "data": {
    "messageCreated": "My name is John."
  }
}

We have implemented GraphQL subscriptions.

Salesforce Data Migration Best Practices

Thursday, January 23rd, 2020

Introduction

Salesforce data migration is the process of moving Salesforce data to other platforms or organizations. The migration is also an opportunity to clean the data. The data should be:

  • Complete — it contains all the necessary details for all users
  • Relevant — required information is included
  • Timely — the data is available when needed
  • Accessible — the data can be accessed immediately
  • Valid — the data is in the correct format
  • Reliable — the data is authentic
  • Unique — there are no duplicate records

The Challenge

To migrate data in Salesforce from one organization to another or from one division to another.

The Solution

Define which method is best suited to import/export your data. Then, understand the most effective practices for organizing and migrating that data.

1. Begin by identifying the data that needs to be migrated.

Choose objects that need to be migrated.

You might want to migrate only the “contact information” from every account, or you might even want to migrate “account information” from a particular division.

2. Create templates for the data that needs to be migrated

An Excel template must be created for each Object. This is done using a data export from Data Loader.

Objects have necessary relationships that dictate the order of data migration. So, identify the required fields for each Object.

3. Populate all the templates

Make sure to review the data before populating it in the template.

4. Prepare the destination org

You might want to create custom fields to store legacy ID information.

Optionally, you can give the custom field the “External ID” attribute, and it will be indexed. By doing this, relationships will be maintained, and you can build custom reports for data validation.

For data that is contained in non-standard fields in the old organization, consider creating custom fields.

5. Validate the data

The following techniques can be used to validate the migration:

  • Spot check the data
  • Review exception reports tracking any data that was not migrated
  • Create a custom report to validate record counts and provide you with a snapshot of the migration.

A few words of advice:

Before migrating the data with Salesforce, you should be sure of how the user IDs of the existing database map to the new system.

Ensure you have at least a few licenses available for the old instance after the cut-off date. It is a good idea to keep a few months of access, so that if you face any issues with the migration, you can always go back to the old instance and take your time to investigate.

Keep an eye on the space that is being consumed.

Perform testing before rolling out the instance.

Finally, Salesforce data migration is a useful and important task to provide effective data solutions to an organization. However, it must be performed without affecting the quality of the data within the system.
