Data Engineer
Job details
About Turvo: Turvo provides a collaborative Transportation Management System (TMS) application designed specifically for the supply chain. Turvo Collaboration Cloud connects freight brokers, 3PLs, shippers, and carriers to unite supply chain ecosystems, delivering outstanding customer experiences, real-time collaboration, and accelerated growth. The technology unifies internal and external systems, providing one end-to-end solution that streamlines operations, enhances analytics, and automates business processes while eliminating redundant manual tasks. Turvo’s customers include some of the world’s largest Fortune 500 logistics service providers and shippers, as well as small to mid-sized freight brokers. Turvo is based in Dallas, Texas, with offices in Hyderabad, India.
Role: As a Data Engineer, you should be an expert in architecting enterprise data warehouse (DW) solutions across multiple platforms, and in the design, creation, management, and business use of extremely large datasets. You should have strong business and communication skills, enabling you to work with business analysts and engineers to determine how best to design the data warehouse/lake for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes to support rapidly growing and dynamic business demand for data, delivering data as a service that directly informs day-to-day decision making. You should also be able to develop and tune SQL queries to provide optimised solutions to the business.
Responsibilities:
- Experience building enterprise-scale data warehouse and database models end-to-end.
- Proven hands-on experience with Snowflake and related ETL technologies.
- Experience working with Tableau, Power BI or other BI tools.
- Experience with native AWS technologies for data and analytics such as Redshift, S3, Lambda, Glue, EMR, Kinesis, SNS, CloudWatch, etc.
- Experience with NoSQL databases (MongoDB, Elasticsearch).
- Experience working with relational databases, including writing and optimising SQL queries for analytics and reporting.
- Experience developing scalable data applications and reporting frameworks.
- Experience working with message queues, preferably Kafka or RabbitMQ.
- Ability to write code in Python, Java, Scala or other languages.
- 3+ years of experience architecting DW/Data Lake solutions for the enterprise across multiple platforms.
- Experience writing high quality, maintainable SQL on large datasets.
- Expertise in designing and implementing scalable data pipelines (e.g., ETL) and processes in a Data Warehouse/Data Lake to support dynamic business demand for data (see the illustrative sketch after this list).
- Experience building and optimising logical data models and data pipelines, delivering high-quality data solutions that are testable and adhere to SLAs.
- Excellent knowledge and experience of query optimisation and tuning.
- Knowledgeable about a variety of strategies for ingesting, modelling, processing, and persisting data.
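For context on the kind of pipeline and SQL work described above, here is a minimal, illustrative Python sketch of one ETL step: copying a daily extract from an S3 stage into a Snowflake staging table and running a basic row-count check. All names (stage, table, warehouse, connection parameters, dates) are hypothetical placeholders, not Turvo's actual systems.

```python
"""
Minimal ETL sketch (illustrative only): load a staged daily extract into a
Snowflake staging table, then run a simple data-quality check. Every object
name and credential below is a hypothetical placeholder.
"""
import os
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical connection parameters, read from the environment so that
# no credentials are hard-coded.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="REPORTING_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load the day's files from a (hypothetical) external S3 stage.
    # COPY INTO skips files it has already loaded, so re-running the
    # same day is safe.
    cur.execute(
        """
        COPY INTO staging.shipments_daily
        FROM @s3_shipments_stage/dt=2024-01-01/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
        """
    )
    # Basic data-quality gate before the table is exposed to BI tools.
    cur.execute("SELECT COUNT(*) FROM staging.shipments_daily")
    row_count = cur.fetchone()[0]
    if row_count == 0:
        raise RuntimeError("Load produced zero rows; failing the run.")
finally:
    conn.close()
```

In practice a load like this would be parameterised by date, scheduled and retried by an orchestrator, and followed by transformations into the reporting models consumed by Tableau or Power BI.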