DATA IN MOTION
Change the way you think about your data.
With Anatella, your data is no longer “lying around” inside some static “data storage”.
With Anatella, anybody can create, in a few mouse clicks, dynamic data flows that reveal the gold in your data.
WELCOME TO ANATELLA
Anatella changes the way we think about data management:
empower business users (self-service data management and analytics), focus on the diffusion of innovation,
low infrastructure (big data on one or two laptops), and incredible ease of use.
Anatella is the platform Citizen Data Scientists were waiting for to reach their potential.
What is Anatella
Anatella is the center of the TIMi Analytical platform. At first sight, Anatella looks like a user-friendly data management (ETL) tool, but it’s much more than that. With Anatella, you can solve the most advanced machine learning problems. Anatella is also a collaboration platform that allows easy collaboration between “expert coders” and less technical “business” users.
Functionalities
Support and training
Quick start guide, video tutorials, documentation, training sessions and other resources for Anatella.
Support center
Download Anatella with the “TIMi Community Edition” or the “Business Edition trial”. Free to use. No limit, no constraint.
Download Anatella
Fast & User-Friendly Data management
This typically includes: extracting different datasets from various storage types and locations; cleaning and validating your datasets; computing many different aggregates on your datasets; joining several datasets together (you know: those two databases that were never supposed to be joined because they lack a proper common key?); injecting your datasets into an RDBMS, a BI tool, a modeling tool, or R/Python.
With Anatella, you can easily perform any data quality and data cleaning tasks on large data volumes (handle tables with several billion rows using only one laptop).
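To make the clean/aggregate/join steps above concrete, here is a minimal sketch in plain Python over two toy in-memory tables. (Anatella itself performs these steps visually, without any code; the table names and columns here are invented for illustration.)

```python
from collections import defaultdict

# Two toy "datasets" standing in for extracted tables.
customers = [
    {"customer_id": 1, "country": " be "},   # messy value that needs cleaning
    {"customer_id": 2, "country": "FR"},
]
orders = [
    {"customer_id": 1, "amount": 100.0},
    {"customer_id": 1, "amount": 50.0},
    {"customer_id": 2, "amount": 75.0},
]

def clean(rows):
    # Validation/cleaning step: normalize the country code.
    return [{**r, "country": r["country"].strip().upper()} for r in rows]

def aggregate(rows, key, value):
    # Aggregation step: sum `value` per `key`.
    totals = defaultdict(float)
    for r in rows:
        totals[r[key]] += r[value]
    return dict(totals)

def join(left, totals, key):
    # Join step: attach the aggregate to each left-hand row.
    return [{**r, "total_spent": totals.get(r[key], 0.0)} for r in left]

result = join(clean(customers), aggregate(orders, "customer_id", "amount"), "customer_id")
```

Each function mirrors one box of a visual data-flow graph: the output of one step feeds the next.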
R and Python integration
Program in R and Python without seeing a single line of code
A collaboration/communication Framework
Data science teams are composed of people with many different backgrounds and skills. Usually such teams include some “Business Analysts” (who typically don’t want to see any line of code) and some “Expert Coders” (who enjoy coding). The “Business Analysts” usually drive the demands and need to explain to the “Expert Coders” the nature of the problem that they want to solve. These explanations very often lead to frustration because of the lack of a proper support (or common language) with which these two groups of people can communicate efficiently. Anatella finally arrives with a solution: a common language that both Analysts and Coders can use to communicate smoothly, exchange ideas, and make progress together, in a collaborative way, to arrive at the desired results. This is all made possible thanks to the abstraction layer offered by Anatella around the code (this code is written in R/Python/JS/C++). Thanks to this abstraction layer, the Analysts don’t see any line of code while, at the same time, the Coders are still writing code. …And both groups can still “think together” and bring meaningful contributions on how to solve the problem at hand.
Simple “Static” Reporting
Anatella is fully integrated with the MS-Office suite. In a few mouse clicks, Anatella can read, write or update MS-Excel .xlsx files at very high speed (using its own proprietary code). Anatella allows you to automatically update all the charts and graphs of all your MS-Office reports (without using any unmaintainable VBA scripts!). For example, Anatella can automatically update the charts in your PowerPoint report, for easy reporting to your C-level.
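For readers who want to see the general idea of programmatic report refresh, here is a small sketch using the third-party openpyxl library (Anatella uses its own proprietary xlsx engine; the sheet name, figures and file name below are invented for illustration):

```python
# Minimal sketch: write fresh KPI figures into a sheet and bind a chart to
# that data range, so the chart follows the cells. Uses openpyxl (assumed
# installed); this is NOT Anatella's mechanism, only the general idea.
from openpyxl import Workbook
from openpyxl.chart import BarChart, Reference

wb = Workbook()
ws = wb.active
ws.title = "KPIs"

# Fresh figures that the report chart should pick up.
rows = [("Month", "Churn %"), ("Jan", 4.2), ("Feb", 3.9), ("Mar", 3.1)]
for row in rows:
    ws.append(row)

# A chart bound to the data range above.
chart = BarChart()
data = Reference(ws, min_col=2, min_row=1, max_row=len(rows))
cats = Reference(ws, min_col=1, min_row=2, max_row=len(rows))
chart.add_data(data, titles_from_data=True)
chart.set_categories(cats)
ws.add_chart(chart, "D2")

wb.save("kpi_report.xlsx")
```

The point of automating this step is that the same script can regenerate the report after every data refresh, with no VBA involved.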
BI integration and OLAP
Anatella is the ideal companion to any & all BI tools (such as: Tableau, Qlik, Kibella and Kibana). Anatella automatically generates and updates the datasets used inside your BI solution. Anatella can natively create at high speed the internal, proprietary file format used by Tableau, Qlik, Kibella and Kibana. It means that refreshing the data “behind” your dashboard has never been so easy.
Built for Machine Learning
Predictive analytics projects differ in many ways from “classical” data management projects. They are characterized by the size of the manipulated tables: it’s very common to have “analytical datasets” (i.e. tables) that contain several thousand columns/variables. For example, our pre-made solution for telecom contains a customer view with 3,000 variables. The predictive analytics solution developed by Sura (the largest insurer in Latin America) contains a customer view with 23,000 variables. This is why, contrary to all other tools, Anatella’s interface always stays 100% responsive, even for these high-column-count tables. Anatella offers many other functionalities tailored for predictive modeling (advanced feature engineering, meta-data-free transformations, automated text mining, easy pivoting, etc.) that are missing from other solutions.
Built For Iterative Work
BI, analytics and predictive analytics projects are characterized by their “exploratory” nature: before starting such projects, you don’t know in advance the “right” KPIs to compute or the “right” features to create. This means that analytics projects usually involve several “iterations” (at each iteration your KPIs are refined, your features discriminate more, your dataset gets cleaner). Thus, you need a data management tool that allows you to quickly iterate over different variations of your data transformations.
Classical ETL tools are designed for “Data Migration” tasks where the job is usually 100% specified and you cannot deviate from it. This means that these Classical ETL tools do not allow you to quickly iterate because they make it difficult and painful to modify an already existing data transformation (i.e. they are not built for iterative work).
Thanks to the unique “meta-data-free” feature and the unique “cache system” of Anatella, Anatella is currently the only ETL solution that allows you to quickly iterate without (too much) suffering.
Built for Advanced Machine Learning (e.g. Graph Mining & Text Mining)
Anatella is the only solution able to run SNA (Social Network Analytics) and graph mining algorithms on the very large graphs available inside telecom & banking companies. Using graph mining algorithms, you can detect communities, detect social leaders, compute graph kernels, etc. on graphs with several dozen million nodes and several billion arcs.
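To give a feel for what such graph data looks like, here is a deliberately naive sketch in plain Python: a tiny edge list in the style of telecom call records, a crude “social leader” proxy (highest-degree node), and one connected component found by BFS. (These are textbook building blocks, not Anatella’s actual SNA algorithms, and the node names are invented.)

```python
from collections import defaultdict

# Toy call graph: (caller, callee) pairs, as in telecom call-detail records.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("e", "f")]

degree = defaultdict(int)
adjacency = defaultdict(set)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
    adjacency[u].add(v)
    adjacency[v].add(u)

# "Social leader" here = highest-degree node (a very naive proxy for the
# much richer leader-detection metrics used in real SNA).
leader = max(degree, key=degree.get)

def component(start):
    # Plain BFS: one connected component, the simplest "community" notion.
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt in adjacency[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return seen
```

Real community detection and graph-kernel computations are far more involved; the sketch only shows the shape of the input data and of a basic traversal.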
Anatella also handles the most difficult text mining problems flawlessly. Automatically categorize a large corpus of multilingual documents without losing time creating a dictionary. Find, single out and extract specific business-related entities out of complex documents (such as a specific price, lost among many other numbers inside the document). Anatella natively has very strong text mining capabilities that can be extended even further thanks to the R/Python integration.
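As a tiny illustration of the “extract a specific price lost among many other numbers” task, here is a minimal sketch using a regular expression from the Python standard library (the document text and pattern are invented; production-grade entity extraction needs far more robust rules or a trained extractor):

```python
import re

document = """Contract ref 2024-118. Delivery within 30 days.
Total price: 1,250.00 EUR (VAT excluded). Penalty clause: 2%."""

# Naive pattern: an amount (with thousands separators and optional cents)
# followed by a currency code. Only the price matches; the contract
# reference, the "30 days" and the "2%" are correctly skipped.
PRICE = re.compile(r"(\d{1,3}(?:,\d{3})*(?:\.\d{2})?)\s*(EUR|USD|GBP)")

match = PRICE.search(document)
price, currency = match.groups()
```

The interesting part is the negative space: a good extractor must ignore all the other numbers in the document, which is exactly what makes this problem hard at scale.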
Anatella not only solves the “data in motion” problem, it also solves the “data storage” problem. Thanks to two proprietary file formats (.gel_anatella and .cgel_anatella), Anatella can store vast amounts of data without requiring much disk space. For example, you can store a 5TB RDBMS database in less than 100GB of hard drive space and still have incredible I/O performance while manipulating your datasets.
Furthermore, from within Anatella, the Hadoop HDFS drive is seen as a “normal” local drive (e.g. you can read/write/browse your HDFS drive “as if” it were a local drive): it’s as easy to store all your dataset files on an HDFS drive as on a highly efficient NAS/SAN/RAID6/“C:” drive. An HDFS drive offers unlimited storage at a low price.
Complete Hadoop Integration
Thanks to the direct Hadoop integration, creating a centralized Data Lake on the HDFS drive with Anatella is a breeze.
All the tools inside the Hadoop ecosystem read and write their data inside files on the HDFS drive. The file formats saved on an HDFS drive are, typically: text files, .parquet files, .avro files (and also .gel_anatella or .cgel_anatella files when using Anatella).
Anatella is the only data management tool inside the Hadoop ecosystem to be programmed in C (all other tools are coded in Java). In particular, Anatella is the only tool to use low-level C code to read/write parquet files directly from/to a HDFS drive. This means that Anatella is several orders of magnitude faster than any other tool inside the Hadoop ecosystem.
Anatella connects natively to any “legacy” data source (e.g. old SAS .sas7bdat files, old AS-400 mainframe files created in COBOL, old DBase or FoxPro databases, etc.) and it also connects to the “new wave” of Big Data solutions (HDFS) and IoT solutions (real-time). This means that Anatella is the perfect tool to integrate all these different (new and old) technologies.
For example, with Anatella, in a few mouse clicks, you can feed all your “HDFS datasets” into your Data Warehouse (based on Oracle, Teradata, etc.), and the other way around.
Straightforward & Fast Industrialization of R&D Findings
Once a data transformation is ready for industrialization, you can deploy it on your (pre-)production server/cluster in a few mouse-clicks.
Integration with any scheduler tool is easy (for daily/weekly/monthly automated runs). In particular, Anatella has been thoroughly tested with the famous Jenkins scheduler. You can easily use Jenkins to schedule all your Anatella jobs. Jenkins is 100% free, and it is also one of the easiest-to-use, most versatile and most stable schedulers.
IoT integration and Real-Time streaming
Anatella data transformations work both in classical “batch mode” and in real-time streaming mode (although not all data transformation operators are available when working in streaming mode). Direct bi-directional connections to common IoT brokers such as Kafka, RabbitMQ and Mosquitto are straightforward and easy. Thanks to the load balancing included in such brokers, Anatella can sustain a practically unlimited number of simultaneous connections (just add more nodes if you need more speed).
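To sketch what a streaming transformation looks like, here is a tiny pure Python handler for one incoming message, with the Kafka wiring shown as a hedged comment (the broker address, topic name and message fields are invented; the client shown is the third-party kafka-python library, not Anatella’s own connector):

```python
import json

def handle_event(raw: bytes) -> dict:
    """Transform one streamed message: parse it and flag whether it is valid."""
    event = json.loads(raw)
    event["valid"] = "device_id" in event and "reading" in event
    return event

# Hedged wiring sketch (kafka-python; placeholders, not from the Anatella docs):
#
#   from kafka import KafkaConsumer
#   consumer = KafkaConsumer("iot-readings", bootstrap_servers="broker:9092")
#   for message in consumer:
#       print(handle_event(message.value))

processed = handle_event(b'{"device_id": "t-17", "reading": 21.4}')
```

The design point: the per-message transformation is a pure function, so the same logic can run unchanged in batch mode (over a file of messages) or in streaming mode (fed by the broker).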
No Cloud, No In-Memory, No “Hype” Bullshit Keywords
The technologies used in Anatella are selected based on their efficiency rather than on their “hype” factor. This means that Anatella is not an “in-memory” solution: the dataset sizes that you can manipulate with Anatella are only limited by the size of your (local or HDFS) drive (and not by your RAM). This “limit” is pushed even further back thanks to the highly compressed proprietary file formats available in Anatella (.gel_anatella and .cgel_anatella files), which allow you to store terabytes of data in a few dozen gigabytes.
This also means that Anatella does not rely on a large “cloud” infrastructure to provide the necessary computing power to process these vast amounts of data. Of course, you can still use cloud/distributed computation (or storage) with Anatella but, 99% of the time, Anatella’s engine is so efficient (both in terms of computation and storage) that one 2K€ laptop is more than enough to handle all the tasks at hand.
To explain the difference in efficiency between Anatella and other “big data” solutions: to the best of our knowledge, Anatella is the only “Big Data” solution that is developed 100% in C/Assembler rather than Java.
Non-intrusive & Easy Deployment
You can install and run your data-transformation scripts on any Windows-based PC (or inside “Wine” on Linux).
Anatella has a small, automated & easy wizard-based installation system that installs Anatella in less than a minute. You can run your Anatella data-transformation Scripts on any PC, even with restricted system privileges.
There even exists a portable version of Anatella that requires no installation (i.e. it’s a simple ZIP file to unzip). The portable version guarantees that no bloatware will ever be installed on your machine when “installing” Anatella since there is no real installation: i.e. All you do is unzipping a ZIP file. The portable version is also very handy if you are working as an external consultant at your client site and want to quickly process some data. With the portable version of Anatella on a USB stick, you can run your Anatella data-transformation graphs on any PC, even with very restricted system privileges. There is no excuse to not use Anatella everywhere!
Low licensing cost
There are no licensing fees based on the volume of processed data (i.e. there is no “data tax”). Furthermore, the Community Edition of Anatella is totally free and covers more than 95% of the usual business cases.
You only pay for additional functionalities, maintenance and support. Not for volume. This type of licensing model is important for growing businesses with an increasing amount of data: if your business grows, with Anatella, you can still process your data as often as you need. This is in opposition to cloud-based solutions: with cloud-based solutions, the more data you have, the more you pay… Even if the “cloud” seems cheap at first sight, the bill increases so quickly that you’ll be bankrupt without even knowing why! We have already witnessed that phenomenon countless times: it’s very common to see a small startup fail because of its Amazon/Azure bills (…and, of course, Amazon will be the last one to warn you about that!).
Now create unrivaled TIMi predictive models on large graph-based datasets using LinkAnalytics: the ultimate solution to extract advanced Social Network Analytics metrics out of gigantic social data graphs.
We reduced by 10% the churn on the customer-segment with the highest churn rate.
The TIMi Suite includes a very flexible ETL tool that swiftly handles terabyte-size datasets on an ordinary desktop computer.