Archive for category: Data Engineering

Improving Data Quality with the help of Tiger's Data Quality Framework
| Blogs, Data Engineering | Authors: Ajeeth Kumar A, Chetan Jaamdar, Gaurav Kantrod | According to Gartner, poor data quality could cost organizations up to $12.9 million…
Harnessing Retail’s Ring of Power with GCP
| Blogs, Data Engineering, Best of 2022, Customer 360, GCP, Retail Recommender Engine | Author: Prabhanjana Guttal | What do customers really, really want: more options? Value for money? Better-quality products? Exciting discounts? With…
TIGER DATA FABRIC – Self-Service Data Lake Framework in AWS
| Blogs, Data Engineering, AWS, Best of 2022, data lake, Tiger Data Fabric | Author: Suprakash Nandy | “Data is a precious thing and will last longer than the systems themselves.” – Tim Berners-Lee, inventor of the World Wide Web…
Why Organizations Need A Cloud Data Engineering Council
| Blogs, Data Engineering, Cloud Adoption, Cloud Data Engineering, data management, Industry 4.0 | Authors: Sundaresan Narasimha Moorthy, Muthamizh Selvan Subburayan | At the center of Industry 4.0 (4IR) is the generation of data in…
Enabling Data Driven Decisions for Efficient Business Operations
| Blogs, Data Engineering, Big Data, data-driven decisions, data lake | Authors: Muthu Govindarajan, Manish Gupta | Modern Data Lake: Data Lake solutions started emerging out of technology innovations such as Big…
Modern BI – A key marker for a democratized and efficient BI environment
| Blogs, Data Engineering, Business Insights, Modern BI | Authors: Riaz Samadh, Saravanan Perumal | Recently, at the DES22 Summit, held virtually on the 30th of April 2022, we spoke…
A Data Engineering implementation using AWS
| Blogs, Data Engineering, Advanced Analytics in Data Engineering, AWS | Author: Sidharth Ramesh | Introduction: Many small to midsize companies use analytics to understand business activity, lower their costs, and…
Simplifying Geospatial Processing Using GeoPandas
| Blogs, Data Engineering, Technical, GeoPandas, Geoprocessing, Python libraries | Author: Karunakar Gumudala | Introduction: Spatial datasets are often more than a few gigabytes in size, with millions of records each.
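The post's full walkthrough isn't reproduced here, but as a rough illustration of the kind of workflow GeoPandas enables, here is a minimal sketch: load a polygon layer, build a point GeoDataFrame, align coordinate reference systems, and run a spatial join. The file and column names (regions.shp, stores.csv, lon, lat) are illustrative assumptions, not data from the post.

```python
# Minimal GeoPandas sketch (illustrative file/column names):
# join point records to the polygons that contain them.
import geopandas as gpd
import pandas as pd

regions = gpd.read_file("regions.shp")          # polygon layer (assumed file)

stores = pd.read_csv("stores.csv")              # table with lon/lat columns (assumed)
stores = gpd.GeoDataFrame(
    stores,
    geometry=gpd.points_from_xy(stores.lon, stores.lat),
    crs="EPSG:4326",                            # lon/lat in WGS 84
)

# Reproject the points so both layers share one coordinate reference system.
stores = stores.to_crs(regions.crs)

# Spatial join: tag every store with the region polygon it falls inside.
# (On GeoPandas < 0.10, use op="within" instead of predicate="within".)
joined = gpd.sjoin(stores, regions, how="inner", predicate="within")
print(joined.head())
```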
Building a Near-Real Time (NRT) Data Pipeline using Debezium, Kafka, and Snowflake
| Blogs, Data Engineering, AI for Real Estate, Data Pipeline | Authors: Arun Kumar Ponnurangam, Karunakar Goud | Institutional investors in real estate usually require several discussions to finalize their investment strategies and…
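The pipeline's actual design lives in the full post; purely as a sketch of the consume-and-load leg such an architecture needs, the snippet below reads Debezium change events from a Kafka topic and stages them in Snowflake. The topic and table names, credentials, envelope handling, and the row-at-a-time INSERT are assumptions for illustration; a production pipeline would batch, use the Snowflake Kafka connector, or Snowpipe.

```python
# Sketch of the consume-and-load leg of a Debezium -> Kafka -> Snowflake
# pipeline. Topic, table, and credentials are illustrative placeholders.
import json
from kafka import KafkaConsumer            # pip install kafka-python
import snowflake.connector                 # pip install snowflake-connector-python

consumer = KafkaConsumer(
    "dbserver1.public.deals",              # Debezium topic: server.schema.table (assumed)
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

conn = snowflake.connector.connect(
    account="my_account",                  # placeholder credentials
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="RAW",
    schema="CDC",
)
cur = conn.cursor()

for msg in consumer:
    if msg.value is None:                  # skip tombstone records
        continue
    # Debezium's envelope carries the new row image in "after"
    # ("payload.after" when the JSON converter includes schemas).
    event = msg.value
    row = (event.get("payload") or event).get("after")
    if row is None:                        # delete events have no "after"
        continue
    cur.execute(
        "INSERT INTO deals_stage (raw_event) SELECT PARSE_JSON(%s)",
        (json.dumps(row),),
    )
```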