Tech Archives - Ekino global

This article is also available in French.

Take the examples of Deliveroo, Uber Eats and Frichti: what we believed was a good experience five years ago – to have a warm margherita delivered to our door in 45 minutes – is today viewed as a little disappointing. Through simplifying the ordering process, providing a wider menu, and having exceptional customer follow-up, these services have completed the satisfaction circle to such an extent that our overall expectation has permanently risen to a higher level.

Companies like these impose new standards of experience that create great satisfaction because their focus is squarely on customer needs. By studying and analyzing these standards, we can learn to optimize customer satisfaction and apply the principles to our own activities.

However, these projects can be complex because, often, the entire service paradigm needs to be redesigned. Satisfaction is resolutely business-critical because it helps to strengthen the brand image and assists in the acquisition of new customers and the retention of existing ones. Measuring customer experience and satisfaction should be a pillar of corporate culture, no longer limited to a simple KPI.

Mining

Cryptocurrency mining is a well-known use case: the cryptocurrency security mechanism ‘Proof of Work’ requires a large amount of electrical energy, so as to ensure that the cost of an attack on the network is greater than the potential gain from attacking it. In order to have enough computing power allocated to securing the network, the ‘miners’ are rewarded by the cryptocurrency network. Many companies were created around this activity, which became known as ‘cryptocurrency mining’. Now, in order to reduce energy costs, a number of these companies are favouring renewable energies, capitalising on surplus energy produced in renewable power plants to obtain lower rates.

This can be taken into account in ROI calculations for renewable power plant projects: instead of losing energy that is produced but not used, it can be put to work mining cryptocurrencies, making the plant a more profitable investment. The market is real for the energy sector and has been developing for several years.

Traceability

One challenge of energy consumption is determining and monitoring the source of the electricity being consumed. In the context of green contracts, for example, this can be problematic: how is it possible to guarantee to consumers that the energy they consume comes directly from renewable sources? Two companies, Engie and Ledger, are working together on a device that certifies the source of the green energy produced by recording it on a blockchain, which makes it possible to provide transparency to users.

These traceability capabilities can also be useful for local loops – peer-to-peer power generation networks – allowing users to exchange energy with each other. In addition, they can simplify exchanges between different operators, for example, in the case of sales of resources between countries.

It is already possible to industrialize these projects at a certain scale, as shown by the collaborative work of Engie and Ledger, and there is real consumer demand. In addition, the prospects related to traceability are numerous, which makes this a topic of particular interest today.

Monitoring

Combined with the Internet of Things (IoT), Blockchain technologies can help set up an intelligent monitoring system that reduces intervention time during failures or even anticipates them. IoT accurately measures the data specific to each device, and Blockchain technologies, through their peer-to-peer protocol, can guarantee the integrity of the transmitted data. Imagine, for example, an unauthorised connection to a power line whose monitoring system consists of only one IoT box: if the attacker knows about the box, they will be able to bypass it or send false information. If, instead, we integrate this single box with others so that all the boxes on the network validate the integrity of the data – the same validation model by which Bitcoin nodes validate the blocks built by miners – the boxes attached to other equipment can cross-check the information that each device sends. Erroneous information will be quickly detected.
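To make the cross-validation idea more concrete, here is a minimal, purely illustrative Python sketch of the kind of majority-vote check such monitoring boxes could perform; the names (Reading, consensus_value) and the tolerance rule are assumptions for the example, not a description of any real product.

```python
# Illustrative sketch only: a toy majority-vote check between monitoring boxes.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Reading:
    box_id: str
    line_id: str
    current_amps: float

def consensus_value(readings: list[Reading], tolerance: float = 1.0) -> float | None:
    """Accept a measurement only if a majority of boxes agree within a tolerance."""
    if not readings:
        return None
    # Bucket readings so that values within `tolerance` of each other count as equal.
    buckets = Counter(round(r.current_amps / tolerance) for r in readings)
    bucket, votes = buckets.most_common(1)[0]
    if votes > len(readings) // 2:
        return bucket * tolerance
    return None  # no majority: flag the line for inspection

readings = [
    Reading("box-1", "line-42", 10.2),
    Reading("box-2", "line-42", 10.3),
    Reading("box-3", "line-42", 55.0),  # tampered or faulty box
]
print(consensus_value(readings))  # -> 10.0 (the outlier is ignored)
```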

The scalability problems inherent in Blockchain technologies still make the implementation of this type of system complex, but solutions are beginning to emerge. Projects such as IOTA seek to address issues specific to interactions between Blockchain and IoT.

These are just a few cases of Blockchain technologies being used in the energy sector; many more are yet to be identified. If you would like to find out more, do not hesitate to contact us.

Let’s renew the joy! 5 tips on how to optimize customer satisfaction

Take the examples of Deliveroo, Uber Eats and Frichti: what we believed was a good experience five years ago – to have a warm margherita delivered to our door in 45 minutes – is today viewed as a little disappointing. Through simplifying the ordering process, providing a wider menu, and having exceptional customer follow-up, these […]

Read the rest of Let’s renew the joy! 5 tips on how to optimize customer satisfaction

SFMC – powerful yet complex

It’s not necessarily Salesforce’s fault. Far from it, in fact. No, the CRM landscape has become more complex (driven, in part, by a digital landscape that touches every aspect of customers’ lives), and with it, so has SFMC.

Let’s cut to the chase. SFMC is a powerful cloud-based suite of tools that enables companies to create and manage customer journeys across multiple touchpoints and channels for optimum results. However, this means that SFMC incorporates every aspect of the customer journey – multi-channel delivery, audience profiling, personalisation and interaction management, to name a few. As you may be finding, it’s this complexity that can often hinder efforts to implement and deliver such journeys.

SFMC has a single purpose – to deliver an optimal customer experience across a diverse range of media and channels. The platform comprises multiple components – Journey Builder, Audience Builder, Personalisation Builder, Content Builder, Analytics Builder, across Email Studio, Mobile Studio, Social Studio, Advertising Studio and Web Studio – all requiring precise set-up, fine-tuning and integration to get the customer experience moving.

What you need is a capable and knowledgeable team who possess interoperable mindsets, skills and visibility across technology, marketing and processes.

And this is where things get tricky

On the marketing side, skills and expertise are needed in CRM, Customer, CX, Data, KPIs, Control Groups, Comms, Journey Design, Measurement Frameworks and Test and Learn plans. And on technology: SQL, APIs, Data, Data Signals, Scheduled Automation, Data Layers, Attribute Groups and Data Extensions. Put these all together and we’re in a world of acronyms, cross-purposes and potential confusion.

Ergo, the challenge is two-fold. The first is human capital: a pressing need to access the skills and knowledge of the right people – whether in-house or outsourced – who can optimize those tools. The second is making these skills available at the right time in the process – which could involve pre-training or recruiting, with the extended lead times that entails.

Getting back on track with ekino

At ekino, our Salesforce expertise spans bespoke integration, customer journey definition, and data dissemination and analysis, helping to unlock return on investment for our clients.

We combine data, technology, creativity and design to deliver meaningful experiences across all channels.

And we do all this with a holistic approach, inspired by insights and all beautifully crafted to really connect with your customers.

We’ll make sure that the quality of your data assets is sustained throughout their lifecycle. You don’t want garbage in, garbage out, right?

SFMC can handle the most contextual data you can throw at it. However, it’s only going to perform well if your data governance is up to scratch – quality, usability, availability, security and consistency. We’ll take a look at that as well.

Finally, we’ll help define your company’s ongoing customer experience strategy. CRM – by its very nature – has always been a long haul. But with an automation tool like SFMC taking care of all the grunt work, you can concentrate your efforts on the long-term strategies you need to gain a real and lasting advantage over your competitors.

Why TypeScript?

This article is also available in French. Each of those people will have their own experience with a new language and their own expectations (though these are of course often shared). Some will focus on improving the developer experience, others on delivering a website with as few critical bugs as possible. Some will also think about recruitment issues for hiring developers […]

Read the rest of Why TypeScript?

Ah, passwords! Fruits of our imagination that we like to write down on post-its and reuse over and over.

We are doing that because passwords are boring.

Boring to find, boring to memorize, boring because they should be unique, and because, in the end, we never know why those stupid forms are yelling at us.


But it’s necessary

Security is not about living underground or having a fortress to protect your Tinder account.

Remember the joke: “If a bear attacks you and your friend, you don’t need to outrun the bear, you just need to outrun your friend”?

Well, that’s security! It should be easier to hack your neighbor than you. So we must choose our passwords wisely: Long, with different sorts of characters, and unique.

Build your own passwords

So yes, passwords seem boring and difficult to find and memorize, but hey, they don’t have to be! You could take a random picture, pick out objects in it and combine them into a sentence. Like this:

  • cat
  • glasses
  • bow tie
  • computer

Add them together: “A Cat with Glasses and Bow Tie on a Computer”. Next, add some special characters and you have a nice password!

%A C4t with Glass3s and Bow Tie on @ Computer%
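The same trick can even be scripted. Below is a minimal Python sketch of the idea, assuming a hypothetical word list taken from the picture; it is only an illustration of the technique, not a full-blown password generator.

```python
# Minimal sketch of the "objects in a picture" trick described above.
# The word list and substitutions are just examples; use your own picture.
import secrets

words = ["cat", "glasses", "bow tie", "computer"]   # objects spotted in the picture
substitutions = {"a": "4", "e": "3", "o": "0"}       # a few optional replacements

def build_password(words: list[str]) -> str:
    phrase = " ".join(w.title() for w in words)
    # Randomly apply one substitution so the result is less predictable.
    letter = secrets.choice(list(substitutions))
    phrase = phrase.replace(letter, substitutions[letter], 1)
    special = secrets.choice("%@!#")
    return f"{special}A {phrase} {special}"

print(build_password(words))   # e.g. "%A C4t Glasses Bow Tie Computer %"
```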

You can even use your own secrets as passwords:

  • I love U 3 thousands!
  • I always wanted to say that I’m [put something here] 🙂
  • Always take a towel to travel

Voilà!

How could they be less boring but still hard to hack? Here comes your hero: The password manager.

Password manager

A password manager will create and store your secure passwords for you. You don’t have to memorize strong passwords. You can just copy/paste them! Securely.

Password managers can be free or paid. They can run locally (you own your data) or on the cloud (your data is hosted on someone else’s computer, but is always available).

Choosing one will depend on your use and how much you value your digital data. Here is a comparison of password managers.

It’s that easy. After that you’ll love passwords!

Remember that nothing is unhackable – breaches happen every month. Even Google, Facebook or Apple have been successfully attacked, and it will happen again.

Don’t forget that SMS can be hijacked, cards stolen and fingerprints duplicated (did anyone really think facial recognition was the future of security, when most of us already have 40 high-resolution selfies publicly available on Facebook?).

But those technologies combined with passwords offer a greater barrier against attacks. The umbrella term for that is Two-Factor Authentication (2FA).

E.g.: your bank can provide you with a PIN generator to use each time you want to purchase something on the internet.
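For instance, many services implement 2FA with time-based one-time passwords (TOTP), which is what most authenticator apps generate. Here is a minimal sketch using the pyotp library (assuming it is installed with pip install pyotp):

```python
# Minimal TOTP sketch with the pyotp library.
import pyotp

secret = pyotp.random_base32()   # shared once between the service and your authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code the app would display right now
print(code, totp.verify(code))   # the service verifies it server-side -> True
```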

Conclusion

Nothing is perfect, but, brick by brick, we can create a secure path for everyone.

How to use a chat-bot to stand out?

Today’s e-commerce websites need a very good user experience to attract clients. The user experience determines how the client feels while navigating the website and whether they remember this experience for future purchases. Or it can determine how the client is impacted by our website in comparison to others and help them remember […]

Read the rest of How to use a chat-bot to stand out?

The importance of the price calculation engine (PCE)

For a successful e-commerce website, one of the key points is the price. Throughout the client’s navigation, the price must remain consistent with the items selected. Price changes from one page to another create frustration and uncertainty for the client, which translates into mistrust. This mistrust translates into a bad experience, and the bad experience means the client never comes back. A client who doesn’t come back is not just a single lost sale, but also all the potential clients in their circle. This is how a single component can deeply impact the profit of a website.

Its collapse

Why does the PCE so often turn into a black box during the development of a website? Today’s websites grow quickly, and the functional features that define the PCE grow at the same rate. This means continuously increasing complexity, an outdated history, and knowledge that can’t keep up. Little by little, the PCE becomes a black box, and nobody wants to work on it. Nobody really knows how it works, but everybody knows that a single modification will end up in a long list of problems.

Below, I’ll describe how I handled this problem for a website that lets you obtain an online quote for vehicle repair or maintenance. Our PCE covered multiple repair types (tyres, glass, mechanical or bodywork), but the trickiest part was the discounts (applicable by repair type, by minimum amount, by agreement, cumulative or not…). The problems were that the prices were sometimes not consistent from the beginning to the end of the journey, and that adding a new repair type or a promotion was very complicated.

Its recovery

Now I’m going to walk through the steps I followed to solve our case and make the PCE more manageable. Our PCE had grown too fast: too many rules had appeared without anyone thinking about the conflicts between them, which turned it into a kind of unmanageable monster. Everything starts with a good design; it’s later, when multiple topics come into conflict, that things go wrong.

Isolation

Most of the time, the black box doesn’t really exist as such. Instead, pieces of the PCE are scattered all over the application. This causes several problems:

  • Finding errors is hard: there are multiple places to look (and some are hidden);
  • Understanding its behaviour is hard: it’s complicated to follow the execution workflow correctly;
  • Testing is hard: it has too many dependencies.

To try to solve those problems, we first have to sweep everything under the same carpet: gather the whole PCE in a single place. It was ugly, but elegance wasn’t the goal. Once almost all the rules live in a single place, we have defined boundaries with nothing else inside. This was our first victory.

Hexagonal Architecture

With all the logic gathered in the same box, we can now see that the PCE calls many other services. The PCE obviously needs a lot of information (discount details, spare-part prices, hourly rates…) from the rest of the system to deliver the correct price.

Where is the problem now?

  • Too many entry points;
  • Too many dependencies;
  • A context that is too complicated to build just to query the PCE.

How will Hexagonal Architecture help us? Hexagonal Architecture promotes the independence of each part of the system, which means that the PCE shouldn’t need any other part in order to work correctly.

How to reach this?

  1. I’ve identified all the data/information that the PCE needs;
  2. I’ve identified all the entry points to the PCE;
  3. I’ve created an object that includes all the information identified in step 1;
  4. I’ve modified all the entry points listed in step 2 so that they accept only the object created in step 3.

This way, all the entry points have the same format and, even better, the PCE becomes easier to test. We ended up with a big object for querying the PCE, one that sometimes carries unnecessary information, but that’s not our problem at the moment.
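To make steps 3 and 4 concrete, here is a minimal Python sketch of such a request object and a single entry point; all the names (PriceRequest, RepairLine, compute_price) are hypothetical and the pricing logic is deliberately trivial.

```python
# Purely illustrative sketch of the single request object described above.
from dataclasses import dataclass, field

@dataclass
class RepairLine:
    repair_type: str          # e.g. "tyres", "glass", "mechanical", "bodywork"
    parts_price: float
    labour_hours: float

@dataclass
class PriceRequest:
    lines: list[RepairLine]
    hourly_rate: float
    discount_codes: list[str] = field(default_factory=list)

def compute_price(request: PriceRequest) -> float:
    """Single entry point of the PCE: everything it needs is inside the request."""
    total = sum(l.parts_price + l.labour_hours * request.hourly_rate for l in request.lines)
    # Discount rules would be applied here, using only data carried by the request.
    return round(total, 2)

quote = compute_price(PriceRequest(
    lines=[RepairLine("tyres", parts_price=240.0, labour_hours=1.0)],
    hourly_rate=60.0,
))
print(quote)  # 300.0
```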

Unit tests

Here we are at the step that will guarantee that future modifications won’t break the old rules. But this step is far from easy. Here is the sequence I followed to put the needed tests in place:

  1. Create a single test per repair type with the basic parameters;
  2. Add more tests with the most common parameters and discounts;
  3. Continue with the less common parameters and discounts;
  4. Add the extreme cases;
  5. Add the impossible cases (if something is impossible, we must ensure that the client is made aware of it).

This was a long task. I needed to talk with several people to understand some functional rules, and I investigated the history of functional requests. I also found some existing discrepancies between the expected and the actual results, but I didn’t waste time trying to correct them right away; I would handle them later. In the end, I could guarantee the stability of the PCE.
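As an illustration of step 1, a basic test per repair type could look like the following pytest sketch, reusing the hypothetical PriceRequest and compute_price names from the earlier example:

```python
# Illustrative sketch of step 1: one basic test per repair type, using pytest.
import pytest

from pce import PriceRequest, RepairLine, compute_price  # hypothetical module from the earlier sketch

@pytest.mark.parametrize(
    "repair_type, parts_price, labour_hours, expected",
    [
        ("tyres", 240.0, 1.0, 300.0),
        ("glass", 150.0, 0.5, 180.0),
        ("mechanical", 80.0, 2.0, 200.0),
        ("bodywork", 400.0, 3.0, 580.0),
    ],
)
def test_basic_price_per_repair_type(repair_type, parts_price, labour_hours, expected):
    request = PriceRequest(
        lines=[RepairLine(repair_type, parts_price=parts_price, labour_hours=labour_hours)],
        hourly_rate=60.0,
    )
    assert compute_price(request) == expected
```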

Documentation

Now that I’ve done this big job of investigating the PCE’s past, I must not throw that information away: I should document the system as much as possible. But where? The best place remains inside the application. If the documentation and the algorithms are separated, they won’t evolve together. Some complex parts were described directly in the code, but most of the application’s behaviour was described through the tests. We must ensure that the mechanisms are well described and well understood (and not only by the person who wrote them).

Refactoring

Last step. I had already managed to isolate the PCE, and I had stabilized it against modifications. Finally, I had documented the system to improve readability. Now it was time to rethink the logic, find an architecture that best matches the current needs and future features, and delete old rules that are outdated.

There is no procedure I can give for this task, because it depends on the architecture, on the technologies used, and on many other aspects. Nevertheless, the system is now stable and ready to accept a big refactoring of the PCE without fear.

Final words

In the end, we succeeded in getting a stable PCE:

  • The tests ensure non-regression;
  • The documentation allows quick onboarding and eases modifications;
  • The isolation helps with the integration of new repair types;
  • And there are other advantages besides.

Maybe in a few years the current architecture will be deprecated, but since the system is isolated, replacing it won’t generate the same headaches.

Procedures like the one I’ve described in this article can be used to stabilize and refactor any part of a system. I’ve focused on the PCE because it’s one of the most sensitive parts, and one of the most important for the business. The procedure I’ve described ensures stability, reveals hidden features and problems, and allows you to start refactoring any critical part.

Improvisation in Project Management

Transformed by improv I have been doing improvisational theater as a hobby for many years, and though it has undoubtedly helped me professionally from the very beginning – I became more confident in meetings, listened more, reacted more quickly – I have been feeling lately that, besides the general surface skills (listening, talking to a […]

Read the rest of Improvisation in Project Management

The health crisis that we are currently living through has put many of our technical infrastructures to the test (training sites, video, food e-commerce…) by causing an exceptional overload that can be difficult to bear.

Even if it is difficult to draw a definitive assessment, it appears that platforms and services relying on the elasticity offered by public cloud providers have generally fared much better than average. Indeed, the sudden spikes in connections that were observed (a 100x increase for some MOOC platforms) are much easier to handle when one is not limited by the physical capacity of one’s own datacenter, and when services are designed to scale dynamically.

The unprecedented economic crisis ahead will be an opportunity to take advantage of another benefit of the cloud by optimizing its costs, again through elasticity, but also through a few architectural choices. Here are a few examples and tips on how best to optimize your cloud services.

Identify owners

The cloud brings so much ease and promise of innovation that every company quickly finds itself faced with multiple environments, projects and resources that are sometimes difficult to map. Who has never had to track down the person who started (and then forgot) a “test” VM?

The sine qua non condition for being able to act on costs is knowing how to link resources to their owners (an individual, a project, a business unit, …). Otherwise you quickly find yourself unable to act.

To do this, governance and deployment rules need to be set up as early as possible: how, for example, will we structure GCP projects? How do you tag the resources of different sub-projects within the same AWS account?

Depending on your cloud provider, different tools are available to audit and automatically correct missing information. For example, AWS Config can automatically shut down any VM that does not carry the tag identifying its business-unit owner.
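As a simple illustration of the audit side (the automatic remediation itself is better handled by an AWS Config rule), here is a boto3 sketch that lists EC2 instances missing an owner tag; the “BusinessUnit” tag name is an assumption for the example.

```python
# Minimal sketch: list EC2 instances missing a hypothetical "BusinessUnit" owner tag,
# so someone can chase up their owners.
import boto3

ec2 = boto3.client("ec2")

def untagged_instances(required_tag: str = "BusinessUnit") -> list[str]:
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print(untagged_instances())
```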

Turn off & delete

Depending on your architecture, infrastructure services – and virtual machines in particular – can represent a significant part of the bill. On AWS it is quite trivial to automate the scheduled shutdown and restart of these virtual machines and RDS instances using a Lambda function (e.g. https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/). Development servers can thus be shut down between 8pm and 7am, as well as on weekends, leading to savings of around 50% (storage, for example, continues to be billed).
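As a rough sketch of what such a Lambda can look like (the tag filter and event payload are assumptions for this example; the linked knowledge-center article describes the full setup with the scheduling rules):

```python
# Sketch of a stop/start Lambda: two EventBridge/CloudWatch schedule rules invoke it
# with {"action": "stop"} in the evening and {"action": "start"} in the morning.
# The "Environment=dev" tag filter is an assumption for the example.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    action = event.get("action", "stop")
    response = ec2.describe_instances(
        Filters=[{"Name": "tag:Environment", "Values": ["dev"]}]
    )
    instance_ids = [
        i["InstanceId"]
        for r in response["Reservations"]
        for i in r["Instances"]
    ]
    if not instance_ids:
        return {"action": action, "instances": []}
    if action == "stop":
        ec2.stop_instances(InstanceIds=instance_ids)
    else:
        ec2.start_instances(InstanceIds=instance_ids)
    return {"action": action, "instances": instance_ids}
```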

More and more managed services also offer these features (e.g. RDS and more recently Redshift, also from AWS).

Infrastructure as Code tools (Terraform, CloudFormation, Troposphere, CDK…) are also key in this context. They allow you to rebuild a complete environment in a few minutes. And if you can easily recreate something, you can remove whatever is not immediately useful (for example a project frozen for 6 months).

Resize

In terms of elasticity, not all applications are equal; it depends on whether we are looking at a cloud-native or a “lift and shift” architecture. In the second case, the infrastructure is often composed of virtual machines hosting monolithic applications, plus relational database services (RDS at AWS, Cloud SQL at GCP, …) that cannot necessarily benefit from autoscaling.

It is therefore advisable to examine the resource consumption (CPU, memory, disk) of each server to identify potential savings.

Be careful to use the right tools for this: AWS Trusted Advisor, for example, relies on the CPU and network consumption of virtual machines but does not take into account the memory consumption of applications. It is therefore necessary to complement its analysis with the appropriate metrics, or to use specialized tools.

Re-architect

Different paths are possible. The first one is usually to decouple the components of the application in order to make them scalable, as we explained here: https://www.ekino.fr/articles/performance-et-scalabilite-des-services-numeriques-a-lheure-du-teletravail. Beyond these steps, a more in-depth overhaul of the architecture is possible in order to optimize resources. Many managed cloud services charge on a per-use basis, and therefore in a way that is more linear than an infrastructure that is permanently on.

Thus,

  • A GCP Cloud Storage bucket can in many cases replace a VM running Nginx;
  • API Gateway and AWS Lambda functions can replace a Java server (see the sketch after this list);
  • An AWS Simple Queue Service (SQS) queue can replace a RabbitMQ server.
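As an example of the second bullet, here is a minimal sketch of a Lambda handler behind an API Gateway proxy integration, serving the kind of simple endpoint a small Java API server might otherwise host; the route and payload are made up for the illustration.

```python
# Minimal sketch of a Lambda function behind API Gateway (proxy integration).
import json

def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```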

This is obviously a simplified vision (the functionalities are never totally equivalent), but depending on your consumption it is always worth studying these opportunities. Beyond reducing the cloud bill, it is also an investment that pays off in the medium and long term on the human costs of operations.

Explore service options and pricing

For each of the services implemented, the billing model and the activated options must also be reviewed. Google Cloud Storage offers four storage classes depending on access needs (Standard, Nearline, Coldline, Archive). Logs kept for several years for regulatory reasons can be stored in “Archive”, dividing the cost by 6.
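As a sketch of how this can be automated, the google-cloud-storage client lets you attach a lifecycle rule that moves old objects to the Archive class; the bucket name and the 365-day threshold below are assumptions for the example.

```python
# Sketch of a lifecycle rule moving year-old log objects to the "Archive" class,
# using the google-cloud-storage client library.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-regulatory-logs")              # hypothetical bucket
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
bucket.patch()                                                 # apply the new lifecycle rules
```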

AWS Relational Database Service offers high availability databases, but this option doubles the price of the service. Is it necessary to activate it in a development environment?

AWS also offers numerous pricing options for EC2 virtual machines: standard on-demand, reserved instances, savings plans, spot instances… This last option, particularly suited to batch processing, easily brings cost reductions of 60% or more.

Finally, it is always useful to pay attention to network transfer costs. Depending on whether the flows are intra-zone, intra-region or public, and on the services they link, prices vary greatly. Whenever possible, optimizing the network architecture has the triple benefit of reducing costs while improving performance and security. A classic example is activating VPC endpoints (for S3 in particular) on AWS.

Using PaaS

AWS, GCP and Azure offer great flexibility thanks to their many services, but this often comes at the price of technical and organisational complexity. For simple projects, Platform-as-a-Service offerings such as Platform.sh or Clever Cloud are perfectly suited. Thanks to standardised environments, they are very quick to implement, offer git-based deployment workflows, and hide a large part of the complexity (and therefore the cost) of the run.

To get started

As we’ve seen, there are many leads to investigate, especially since your cloud ecosystem is probably varied. One of the most effective approaches is to bring the technical managers and product owners of each platform together for a workshop to analyze the expenses. AWS Cost Explorer is an excellent tool for this purpose: in a few clicks it lets you understand the distribution of costs and, if necessary, drill down into great detail (all the more so if resources are correctly tagged).


Around this tool, and in small groups, a brainstorming session of about thirty minutes is held to identify ways of optimizing the architecture, starting with the largest cost centers. The ideas are then shared and debated to come up with a plan to which everyone commits, in the manner of an agile sprint planning.

In this way, working iteratively, each team can take ownership of the subject and responsibility for its financial impact, thus starting a “FinOps” approach.

In conclusion

The consequences of this crisis will be multiple, and are still difficult to assess, but it is clear that companies will have to focus even more than before on the value they bring to their customers. Services must make sense and be resilient, but more than ever they must also be economical, even frugal.

It will be interesting in the coming months to see how companies review their cloud strategy for the future: streamlining services to leverage economies of scale, investing in serverless technologies, focusing on “sovereign” clouds… many options are possible. But in any case, the fundamental move to the cloud remains inseparable from the agility that will be required to get out of the crisis in the coming years.

Covid-19: Performance and scalability of digital services in the age of massive remote working

Monday, March 16th, 2020, 9 a.m. My daughter sits down in front of the family computer to open the Digital Workspace (ENT), the digital platform that connects students to their teachers and families to schools. Aim of the game: to continue her distance schooling, as announced by the Minister of Education. But she will get […]

Read the rest of Covid-19: Performance and scalability of digital services in the age of massive remote working