Data Engineer Salary in India - PayScale


Find statistics for a related position:

- Analog Design Engineer
- ASIC Verification Engineer
- Big Data Engineer
- Citrix Engineer
- Electrical Site Engineer
- ICT Engineer
- iOS Engineer
- IT Support Engineer
- IT Test Data Engineer
- L2 Engineer
- L2 Support Engineer
- Linux Engineer
- Mechanical Site Engineer
- Middleware Engineer
- Network Operations Engineer
- NOC Engineer
- PLC Engineer
- RTL Design Engineer
- Senior Database Engineer
- Senior Network Engineer
- Senior Service Engineer
- Senior Verification Engineer
- Signal Integrity Engineer
- Software Configuration Engineer
- Technical Solutions Engineer
- Technical Support Analyst
- Test Data Engineer
- Utility Engineer
- Verification Engineer
- Web Engineer

Recommended vacancies

Data Engineer
Amazon, Bengaluru, KA, IN
DESCRIPTION
Amazon Web Services (AWS) provides a highly reliable, scalable, and low-cost cloud platform that powers thousands of businesses in over 190 countries. AWS' Infrastructure Supply Chain & Procurement (ISCaP) organization works to deliver cutting-edge solutions to source, build and maintain our socially responsible data center supply chains. We are a team of highly motivated, engaged, and responsive professionals who enable the core sustainable infrastructure of AWS. Come join our team and be a part of history as we deliver results for the largest cloud services company on Earth!

Do you love problem solving? Do you enjoy learning new ideas and applying them to problems? Are you looking for real-world engineering challenges? Do you dream about elegant, high-quality solutions? Want to be a part of an amazing team that delivers first-class analytical solutions to our business worldwide?

AWS is seeking a highly motivated and passionate Back End Data Engineer who is responsible for designing, developing, testing, and deploying supply chain applications and process automation. In this role you will collaborate with business leaders, work backwards from customers, identify problems, propose innovative solutions, relentlessly raise standards, and have a positive impact on AWS Infrastructure Supply Chain & Procurement. You will work closely with a team of Business Intelligence Engineers and Data Scientists to architect the application programming interface (API) and user interface (UI) in context with the business outcomes. You will use the best of the available tools, including EC2, Lambda, DynamoDB, and Elasticsearch. You will be responsible for the full software development life cycle to build scalable applications and deploy them in the AWS Cloud.

In this job, you will:
- Work with business leaders, Business Intelligence Engineers, and Data Scientists to ideate business-friendly software solutions.
- Design client-side and server-side architecture.
- Develop visually appealing front-end website architecture, including translating designer mock-ups and wireframes into front-end code.
- Develop functional databases, applications, and servers to support websites on the back end.
- Write effective APIs.
- Test software to ensure responsiveness and efficiency.
- Troubleshoot, debug and upgrade applications.
- Create security and data protection settings.
- Build features and applications with a mobile-responsive design.
- Develop technical specifications and write technical documentation.

We are open to hiring candidates to work out of one of the following locations: Bengaluru, KA, IND.

BASIC QUALIFICATIONS
- Bachelor's or Master's degree or PhD in Computer Science, or a related field.
- 6+ years of non-internship professional software development experience, with 3+ years of technical leadership experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems.
- Experience with client-side technologies (HTML, CSS, JavaScript, jQuery, XML, Angular, React).
- Knowledge of multiple back-end technologies (e.g. C++, C#, Java, Python, Node.js), including object-oriented design.
- Familiarity with SQL and NoSQL databases (e.g. MySQL, MongoDB, PostgreSQL, DynamoDB) and web servers (e.g. Apache, Apache Tomcat).
- Proficiency in mapping the object-oriented model to a relational database (Hibernate and JPA).
- Experience with DevOps tools like Docker and Kubernetes, with application deployment using CI/CD.
- Experience with distributed version control such as Git.
- Basic knowledge of Linux environments, including shell scripting and standard Linux command-line tools.

PREFERRED QUALIFICATIONS
- Experience working with REST and RPC service patterns and other client/server interaction models and API design.
- Experience working with NoSQL databases like MongoDB and DynamoDB.
- Experience with Agile software development.
- 5+ years of experience as a full stack engineer or back end engineer.
- Experience developing complex software systems that have been successfully delivered to customers.
- Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
- Experience communicating with users, other technical teams, and management to collect requirements and describe software product features and technical designs.
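The role above centres on building back-end APIs with Lambda and DynamoDB. As a purely illustrative sketch (the table name, field names, and API payload shape are assumptions, not details from the posting), a minimal API-backed handler of this kind might look like:

```python
import json

import boto3

# Hypothetical table name; the posting names DynamoDB and Lambda, not a schema.
TABLE_NAME = "supply_chain_orders"

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def handler(event, context):
    """Minimal Lambda handler: validate the request payload and persist it to DynamoDB."""
    body = json.loads(event.get("body", "{}"))
    if "order_id" not in body:
        return {"statusCode": 400, "body": json.dumps({"error": "order_id is required"})}

    table.put_item(Item={"order_id": body["order_id"], "status": body.get("status", "NEW")})
    return {"statusCode": 200, "body": json.dumps({"order_id": body["order_id"]})}
```

In practice a handler like this would sit behind an API Gateway route, with validation and error handling grown out from this skeleton.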
Data Engineer
Novo Nordisk, Bangalore, IN
Department: Global Security Operations (GSO)

At Novo Nordisk, we strive to be at the forefront of digital healthcare, and to succeed with this we need many bright minds. Working directly or indirectly with all parts of Novo Nordisk globally, you will play a significant part in leading Novo Nordisk towards the future of digital healthcare securely. Do you have what it takes to succeed in this role? Then join a growing team working in an international environment. Apply now.

About the department
Global Information Security, GBS is an area within Digital Data and IT GBS. Global Information Security, GBS is an integral part of the Global Information Security organisation of Novo Nordisk, which is headed by the CISO (Chief Information Security Officer) and consists of two main areas: Global Security Operations (GSO) and Global Information Security Advisory (GISA). In GIS (Global Information Security) GBS (Global Business Service), we work closely with the Global Information Security organisation in defining and executing the strategy to support the digital journey of Novo Nordisk by ensuring smooth operations of the SOC, Service Management, advisory support, the Security Mailbox, and various other Information Security functions.

The position
As a Security Specialist you will be anchored to and support the Global Security Operations mission within Global Information Security. This includes:
- Building and maintaining interfaces between enterprise applications, infrastructure, and security systems. Examples include ServiceNow, Splunk, Sentinel, Qlik, Power BI, AWS, Azure, Anomali, Cribl, Trellix, and Tenable products, and delivering data analytics, detections, and automation through SIEM/SOAR content development (alerts and signatures) and maintenance.
- Gathering and tracking stakeholder requirements and applying fundamental knowledge of cloud IaaS/SaaS/PaaS platforms to deliver solutions for data collection and analysis, and recording, tracking, and maintaining work items within a work management system such as Azure DevOps.
- Quickly learning the basics of new security platforms and discovering how to integrate with new data sources to configure, gather, and tune security alerts and events from diverse services and software, and documenting work and commenting code sufficiently to communicate critical details so others can pick up the work and develop it further if required.
- Metrics and reporting through maintenance of data connections, data warehousing, and reporting within BI platforms.
- Working independently with minimal guidance and oversight, and being adept at finding solutions to issues that often arise in the course of development and operations.

Qualifications
- Bachelor's degree in computer science, cyber security, engineering, or another relevant field.
- 6-8 years' experience within IT, with 4+ years of relevant development experience as a primary responsibility.
- Preference will be given to individuals experienced with JavaScript and Python.
- Experience with SIEM/SOAR platforms is preferred.
- Fundamental knowledge of and experience working with information security is preferred.
- Foundational knowledge of major operating systems in general, and experience developing and operating in Linux operating systems, is a must.
- Experience with Qlik Sense or Power BI and Azure or AWS is preferred.
- Experience with source code version control systems and API integrations is required.
- Experience with common platforms such as Office 365, ServiceNow, and Azure DevOps is highly preferred.
- Experience administering and building complex queries and working in an enterprise-scale IT operations environment is highly preferred.
- Excellent communication in English and stakeholder management skills.

Working at Novo Nordisk
We are a proud life-science company, and life is our reason to exist. We're inspired by life in all its forms and shapes, ups and downs, opportunities, and c
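The integration work described above (pulling data from platforms such as ServiceNow into SIEM and BI tooling) usually reduces to an authenticated REST call and a downstream load. A minimal sketch, assuming the standard ServiceNow Table API and a hypothetical instance URL and credentials (the posting names the platforms, not any endpoints):

```python
import requests

# Hypothetical instance and credentials; substitute your own.
INSTANCE = "https://example.service-now.com"
AUTH = ("integration_user", "secret")


def fetch_open_incidents(limit: int = 100) -> list[dict]:
    """Pull open incidents from the ServiceNow Table API for downstream reporting."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        params={"sysparm_query": "active=true", "sysparm_limit": limit},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]


if __name__ == "__main__":
    for incident in fetch_open_incidents(limit=5):
        print(incident.get("number"), incident.get("short_description"))
```

A real integration would add paging, retry/backoff, and a write step into the warehouse or SIEM index, but the shape stays the same.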
Data Engineer
Amazon, Bengaluru, KA, IN
DESCRIPTION
The Amazon Prime team is building an evergreen platform that will provide real-time insights into traffic, sales, deals and Prime engagement, and is looking for a rock star data engineer to build it. At Amazon Prime, understanding customer data is paramount to our success in providing customers with relevant and enticing benefits such as fast free shipping, instant videos, streaming music and free Kindle books in the US and international markets. At Amazon you will be working in one of the world's largest and most complex data environments.

You will be part of a team that works with the marketing, retail, finance, analytics, machine learning and technology teams to provide a real-time data processing solution that gives Amazon leadership, marketers and PMs timely, flexible and structured access to customer insights. The team will be responsible for building this platform end to end using the latest AWS technologies and software development principles.

As a Data Engineer, you will be responsible for leading the architecture, design and development of the data, metrics and reporting platform for Prime. You will architect and implement new and automated Business Intelligence solutions, including big data and new analytical capabilities that support our Development Engineers, Analysts and Retail business stakeholders with timely, actionable data, metrics and reports, while satisfying scalability, reliability, accuracy, performance and budget goals and driving automation and operational efficiencies. You will partner with business leaders to drive strategy and prioritize projects and feature sets. You will also write and review business cases and drive the development process from design to release. In addition, you will provide technical leadership and mentoring for a team of highly capable Data Engineers.

Responsibilities
1. Own design and execution of end-to-end projects
2. Own management of the WW Prime core services data infrastructure
3. Establish key relationships which span Amazon business units and Business Intelligence teams
4. Implement standardized, automated operational and quality control processes to deliver accurate and timely data and reporting to meet or exceed SLAs

We are open to hiring candidates to work out of one of the following locations: Bangalore, KA, IND.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
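The posting emphasises real-time insights on AWS, with Kinesis listed among the preferred technologies. As an illustrative sketch only (the stream name and event schema below are assumptions), pushing engagement events into a stream for downstream real-time aggregation could look like:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream name and event schema.
STREAM_NAME = "prime-engagement-events"


def publish_event(customer_id: str, event_type: str) -> None:
    """Send a single engagement event to Kinesis for real-time aggregation."""
    record = {"customer_id": customer_id, "event_type": event_type}
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=customer_id,  # keeps one customer's events on the same shard, in order
    )


if __name__ == "__main__":
    publish_event("C123", "prime_video_play")
```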
Data Engineer
McCain Foods, IN
Position Title: Data Engineer

We are passionate about food. But we're even more passionate about our people. We are looking for qualified Data Engineers to join our McCain family. As part of the McCain family, you will be part of an amazing team doing great things.

JOB RESPONSIBILITIES
- Design and develop solutions, with minimal supervision, that are in line with McCain standards and scalability, using the enterprise Azure data platform.
- Create and maintain optimal, reliable data pipelines to meet business needs.
- Define and operate the infrastructure required for optimal extraction, transformation and loading (ETL) of data from a wide variety of data sources, using various Azure technologies as needed.
- Design and implement life cycle management processes (DevOps) to enable continuous integration and continuous deployment (CI/CD) of data systems.
- Create design documents and communicate the design to delivery teams.
- Contribute to project planning by providing time and effort estimates for development and implementation tasks and deliverables.
- Be a catalyst for change and embrace challenges and new technology.
- Manage assigned deliverables and timelines towards the successful delivery of projects.
- Integrate data from various resources (including external data sources and IoT) and manage big data as a key enterprise asset.
- Create and maintain back-end data solutions for data analysts and data scientists, and assist them in unlocking insight from enterprise data.
- Identify data quality issues and make recommendations for addressing root causes.
- Set up monitoring of data pipelines and work on incident resolution based on expected SLAs.
- Work with stakeholders including product, data and architecture SMEs to assist with data-related technical issues and support their data infrastructure needs.
- Ensure compliance with data architecture and security requirements from other domain owners.
- Work with other domain SMEs and vendors, e.g. Microsoft, to resolve data-related incidents.

MEASURES OF SUCCESS
- Compliance with enterprise standards and best practices.
- Solution performance (measured by response time and compute time).
- Incident reduction.
- High availability.
- Data analysis, AI tools and models, and data algorithms.
- Design and evolution of AI and data models.

ABOUT YOU
- University degree in computer science, information systems or a relevant discipline.
- Strong experience delivering globally scalable solutions.
- Experience with solution cost optimization.
- A minimum of 5 years' experience in a similar role.
- Knowledge of AI models, Agile/Scrum project delivery, DevOps and CI/CD practices.
- Efficient at performing root cause analysis to address issues and applying long-term fixes.
- Good knowledge of Azure data services (Azure Data Factory, Synapse, Azure Data Lake Storage, Event Hub, Databricks, Cognitive Services, etc.).
- Good knowledge of object/function scripting languages like Python and JavaScript.

OTHER INFORMATION
- Travel: as required.
- The job is primarily performed in a standard office environment, but working from home is an option upon agreement with your manager and McCain policies.

Apply now if you are looking to be part of a flourishing and energetic environment. Join a recognized brand known throughout households across the globe. McCain Foods is an equal opportunity employer. We see value in ensuring we have a diverse, antiracist, inclusive, merit-based, and equitable workplace. As a global family-owned company, we are proud to reflect the diverse communities around the world in which we live and work. We recognize that diversity drives our creativity, resilience, and success and makes our business stronger. McCain is an accessible employer. If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meet
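The Azure pipeline work described above typically starts with landing extracts into Azure Data Lake / Blob Storage before Data Factory or Databricks picks them up. A minimal sketch, assuming a hypothetical connection string, container and folder layout (none of these come from the posting):

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical connection string and container; the posting names ADLS/Blob storage, not specifics.
CONNECTION_STRING = (
    "DefaultEndpointsProtocol=https;AccountName=example;AccountKey=...;EndpointSuffix=core.windows.net"
)
CONTAINER = "raw-ingest"


def land_file(local_path: str, blob_name: str) -> None:
    """Land a local extract into the raw zone of the data lake."""
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    container = service.get_container_client(CONTAINER)
    with open(local_path, "rb") as fh:
        container.upload_blob(name=blob_name, data=fh, overwrite=True)


if __name__ == "__main__":
    land_file("sales_2024_04.csv", "sales/2024/04/sales_2024_04.csv")
```

In a production pipeline this landing step would usually be triggered and monitored from Data Factory or an orchestration tool rather than run ad hoc.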
Data Engineer, IN Data Engineering & Analytics
Amazon, Bengaluru, Any
BASIC QUALIFICATIONS
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with data visualization software (e.g., AWS QuickSight or Tableau) or open-source projects

DESCRIPTION
The IN Data Engineering & Analytics (IDEA) team is looking to hire a rock star Data Engineer to build analytics solutions and manage the largest petabyte-scale data infrastructure in India for Amazon India businesses. The IN Data Engineering & Analytics (IDEA) team is the central data engineering and analytics team for all A.in businesses. The team's charter includes: 1) providing Unified Data and Analytics Infrastructure (UDAI) for all A.in teams, which includes a central petabyte-scale Redshift data warehouse, analytics infrastructure and frameworks for visualizing and automating the generation of reports and insights, and self-service data applications for ingesting, storing, discovering, processing and querying data; 2) providing business-specific data and analytics solutions for various business streams like Payments, Finance, and Consumer & Delivery Experience.

The Data Engineer will play a key role as a strong owner of our analytics and data platform. He/she will own and build automations and solutions to ensure the availability, efficiency, scaling, expansion, operations and compliance of the data and analytics platform that serves 200+ IN businesses. The role sits at the heart of the technology and business worlds and provides opportunities for growth, high business impact and working with seasoned business leaders.

An ideal candidate will have a sound technical background in building data and analytics solutions, managing large data infrastructures, working with petabyte-scale data, building scalable automations and driving operational excellence. An ideal candidate will be a self-starter who can start with a requirement and work backwards to conceive and devise the best possible solution, a good communicator while driving customer interactions, a passionate learner of new technology when the need arises, a strong owner of every deliverable in the team, obsessed with customer delight and business impact, and someone who gets work done in business time.

Key job responsibilities
- Design and implement data and analytics solutions and manage our massive data infrastructure to scale for the analytics needs of Amazon IN.
- Build solutions to achieve BAA (Best At Amazon) standards for data availability, consistency and compliance.
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and Amazon/AWS big data technologies.
- Drive operational excellence strongly within the team and build automations and mechanisms to reduce operations.
- Enable efficient data exploration and experimentation on large datasets on our platform, and implement data access control mechanisms for stand-alone datasets.
- Design and implement scalable and cost-effective data and analytics infrastructure to enable non-IN (Emerging Marketplaces and WW) use cases on our data platform.
- Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.
- Enjoy working closely with your peers in a group of very smart and talented engineers.

A day in the life
The India Data Engineering and Analytics (IDEA) team is the central data engineering team for Amazon India. Our vision is to simplify and accelerate data-driven decision making for Amazon India by providing cost-effective, easy and timely access to high-quality data. We achieve this by providing UDAI (Unified Data & Analytics Infrastructure for Amazon India), which serves as a central data platform and provides data engineering infrastructure, ready-to-use datasets and self-service reporting capabilities. Our core responsibilities towards the India marketplace include: a) providing systems (infrastructure) and workflows that allow ingestion, storage, processing and querying of data; b) building ready-to-use datasets for easy and faster access to the data; c) automating standard business analysis, reporting and dashboarding; d) empowering the business with self-service tools to manage data and generate insights.

We are open to hiring candidates to work out of one of the following locations: Bengaluru, KA, IND.

PREFERRED QUALIFICATIONS
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, Datastage, etc.
- Knowledge of AWS infrastructure

Date posted: 04/04/2024 09:32 AM
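Much of the platform described above revolves around loading data into a central Redshift warehouse. As a hedged sketch (the cluster endpoint, table, S3 path and IAM role below are placeholders, not details from the posting), a routine S3-to-Redshift load might be scripted like this:

```python
import psycopg2

# Placeholder objects and connection details; none of these come from the posting.
COPY_SQL = """
    COPY analytics.daily_orders
    FROM 's3://example-bucket/daily_orders/2024-04-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""


def load_daily_orders() -> None:
    """Run a COPY to ingest one day's partition from S3 into Redshift."""
    conn = psycopg2.connect(
        host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
        port=5439,
        dbname="dev",
        user="etl_user",
        password="secret",
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)  # the context manager commits on success
    finally:
        conn.close()


if __name__ == "__main__":
    load_daily_orders()
```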
Data Engineer I, SCOT - AIM
Amazon, Hyderabad, Any
BASIC QUALIFICATIONS
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

DESCRIPTION
SCOT's Automated Inventory Management (AIM) team is seeking talented individuals who enjoy the challenge of diving into complex problems and solving the unsolvable to help our executives make impactful business decisions. The AIM team owns the Availability, Speed, Inventory & Auditing at Scale metrics that measure and improve the overall customer experience. We regularly work directly with executive leadership to improve that experience.

As a Data Engineer, you will analyze large amounts of business data, solve real-world problems, and develop metrics and business cases that will enable us to continually delight our customers worldwide. In this role, you will work closely with a team of Business Intelligence Engineers, Data Engineers and Data Scientists to automate and architect data pipelines, and to make the data more actionable for managing the business at scale. Successful candidates will bring strong technical abilities combined with a passion for delivering results for customers, internal and external.

Key job responsibilities
- Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.
- Build and improve data solutions by translating business requirements into robust, scalable, and supportable solutions that work well within the overall system architecture.
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions.
- Monitor and troubleshoot operational or data issues in the data pipelines.

We are open to hiring candidates to work out of one of the following locations: Bangalore, KA, IND | Hyderabad, TS, IND.

PREFERRED QUALIFICATIONS
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets

Date posted: 04/06/2024 09:32 AM
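The full ETL lifecycle and pipeline monitoring described above are typically expressed as an orchestrated DAG. A minimal sketch, assuming Airflow 2.x; the DAG name, schedule and task bodies are illustrative placeholders, not part of the posting:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


# Illustrative task bodies; the posting describes the ETL lifecycle, not a concrete pipeline.
def extract():
    print("pull source inventory data")


def transform():
    print("clean and aggregate metrics")


def load():
    print("publish metrics tables for dashboards")


with DAG(
    dag_id="aim_daily_metrics",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

Retries and scheduler-level alerting on task failure cover the "monitor and troubleshoot" responsibility at its most basic level.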
Data Engineer, Translation Services
Amazon, Hyderabad, Any
BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL

DESCRIPTION
Are you passionate about data and code? Does the prospect of dealing with massive volumes of data excite you? Do you want to build data engineering solutions that process billions of records and support the handling of translation data using AWS technologies? Do you want to create the next-generation product for self-service data transformation? If so, the Translation Services team is for you! The successful candidate will be a self-starter comfortable with ambiguity, with strong attention to detail, and customer obsessed about ensuring secondary-language experiences are indistinguishable from primary-language experiences at Amazon scale.

Key job responsibilities
As a Data Engineer, you will own the architecture of DW solutions for the enterprise using multiple platforms. You will have the opportunity to lead the design, creation and management of extremely large datasets, working backwards from the business use case. You will use your business and communication skills to work with business analysts and engineers to determine how best to design the data warehouse for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes in the data warehouse platform to support the rapidly growing and dynamic business demand for data, and for using it to deliver data as a service, which will have an immediate influence on day-to-day decision making.

A day in the life
The TS data analytics team processes data on billions of words translated per week and transforms it so that TS teams can use it for ML/LLM model training, quality audits for machine and human translators, invoicing, and data insights for tech and program teams. We divide our team between efforts to rearchitect two large data warehousing systems and implementing customer feature requests.

About the team
Our vision is: language will not stand in the way of anyone on earth using Amazon products and services. Our mission statement is: we are the enablers and guardians of translation for Amazon's customers. We do this by offering a hands-off-the-wheel service to all Amazon teams, optimizing translation quality and speed at the lowest cost possible.

We are open to hiring candidates to work out of one of the following locations: Hyderabad, TS, IND.

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Date posted: 04/10/2024 09:20 AM
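The "billions of words translated per week" workload described above is a natural fit for a distributed aggregation job. A minimal PySpark sketch (the S3 paths and column names are assumptions, not details from the posting):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("weekly-translation-volume").getOrCreate()

# Hypothetical input path and columns; the posting describes the workload, not a schema.
events = spark.read.parquet("s3://example-bucket/translation-events/")

weekly = (
    events
    .withColumn("week", F.date_trunc("week", F.col("translated_at")))
    .groupBy("week", "language_pair")
    .agg(F.sum("word_count").alias("words_translated"))
)

# Write the aggregate back out, partitioned by week, for BI, invoicing and audit consumers.
weekly.write.mode("overwrite").partitionBy("week").parquet(
    "s3://example-bucket/aggregates/weekly_translation_volume/"
)
```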
Data Engineer II - Amazon Music, DISCO CPT
Amazon, Bengaluru, KA, IN
DESCRIPTION
Amazon Music is awash in data! To help make sense of it all, the DISCO (Data, Insights, Science & Optimization) team: (i) enables the Consumer Product Tech org to make data-driven decisions that improve customer retention, engagement and experience on Amazon Music. We build and maintain automated self-service data solutions and data science models, and deep dive difficult questions that provide actionable insights. We also enable measurement, personalization and experimentation by operating key data programs ranging from attribution pipelines and north-star weblab metrics to causal frameworks. (ii) delivers exceptional analytics and science infrastructure for DISCO teams, fostering a data-driven approach to insights and decision making. As platform builders, we are committed to constructing flexible, reliable, and scalable solutions to empower our customers. (iii) accelerates and facilitates content analytics and provides independence to generate valuable insights in a fast, agile, and accurate way. This domain provides analytical support for the Consumer Product Tech org to make data-driven decisions while launching new features and evaluating existing features, with the end goal of improving the customer experience.

The DISCO team enables repeatable, easy, in-depth analysis of music customer behaviors. We reduce the cost in time and effort of analysis, data set building, model building, and user segmentation. Our goal is to empower all teams at Amazon Music to make data-driven decisions and effectively measure their results by providing high-quality, high-availability data and democratized data access through self-service tools.

If you love the challenges that come with big data, then this role is for you. We collect billions of events a day, manage petabyte-scale data on Redshift and S3, and develop data pipelines using Spark/Scala on EMR, SQL-based ETL, and Airflow services.

We are looking for a talented, enthusiastic, and detail-oriented Data Engineer who knows how to take on big data challenges in an agile way. Duties include big data design and analysis, data modeling, and development, deployment, and operations of big data pipelines. You'll help build Amazon Music's most important data pipelines and data sets, and expand self-service data knowledge and capabilities through an Amazon Music data university.

The DISCO team develops data specifically for a set of key business domains like personalization and marketing, and provides and protects a robust self-service core data experience for all internal customers. We deal in AWS technologies like Redshift, S3, EMR, EC2, DynamoDB, Kinesis Firehose, and Lambda. Your team will manage the data exchange store (data lake) and the EMR/Spark processing layer, using Airflow as the orchestrator. You'll build our data university and partner with Product, Marketing, BI, and ML teams to build new behavioural events, pipelines, datasets, models, and reporting to support their initiatives. You'll also continue to develop big data pipelines.

Key job responsibilities
You will work with Product Managers, Data Scientists and other Data Engineers to help design, develop and deliver a scalable data analytics platform and data pipeline solutions to support various Science and ML initiatives at the scale and speed of Amazon Music. In addition, you will help design, develop, and deliver components for the analytics platform at the broader org level and streamline/automate workflows for the broader DISCO organization.

What you'll do:
- Collaborate with cross-functional teams, including data scientists and business intelligence engineers, to design and architect a modern data analytics platform on AWS, utilizing the AWS Cloud Development Kit (CDK). (A hedged infrastructure-as-code sketch follows this posting.)
- Develop robust and scalable data pipelines using SQL/PySpark/Airflow to efficiently ingest, process, and transform large volumes of data from various sources into a structured format, ensuring data quality and integrity.
- Design and implement an efficient and scalable data warehousing solution on AWS, using appropriate NoSQL/SQL storage and database technologies for structured and unstructured data.
- Automate ETL/ELT processes to streamline data integration from diverse data sources and ensure the platform's reliability and efficiency.
- Create data models to support business intelligence, providing actionable insights and interactive reports to end users.
- Enable advanced analytics and machine learning capabilities within the platform to derive predictive and prescriptive insights from the data, through tools like EMR/SageMaker notebooks.
- Continuously monitor and optimize the performance of data pipelines, databases, and applications, ensuring low-latency data access for analytics and machine learning tasks.
- Implement robust security measures and ensure data compliance with internal requirements, industry standards, and regulations to safeguard sensitive information.
- Work closely with data scientists and business intelligence engineers to understand their requirements and collaborate on data-related projects.
- Create comprehensive technical documentation for the platform's architecture, data models, and APIs to facilitate knowledge sharing and maintainability.

About the team
Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

We are open to hiring candidates to work out of one of the following locations: Bangalore, KA, IND.

BASIC QUALIFICATIONS
- Experience with data modeling, warehousing and building ETL pipelines
- 3+ years of data engineering experience
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Bachelor's degree in Computer Science, Data Science, or a related field.
- Experience working with Amazon Web Services (AWS) and proficiency in leveraging various AWS services for data storage, processing, and analytics.
- Solid programming skills in SQL for ETL/ELT jobs.
- Strong programming skills in Python for data processing, ETL, and scripting tasks.
- Familiarity with big data technologies such as Apache Spark, Apache Hadoop, or AWS Elastic MapReduce (EMR).
- Solid understanding of database management systems, both relational and NoSQL, and expertise in query optimization and database performance tuning.
- Excellent problem-solving and analytical skills with the ability to resolve complex data engineering challenges.
- Strong communication and collaboration skills, with a demonstrated ability to work effectively in a team-oriented environment.
- Experience in Unix.
- Experience troubleshooting issues related to data and infrastructure.

PREFERRED QUALIFICATIONS
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Knowledge of batch and streaming data architectures like Kafka, Kinesis, Flink, Storm, Beam
- Knowledge of distributed systems as they pertain to data storage and computing
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience in building or administering reporting/analytics platforms
- Master's degree in Computer Science, Data Science, or a related field.
- Minimum of 5+ years of professional experience in data engineering roles, with a strong track record of building data analytics platforms.
- Extensive experience working with Amazon Web Services (AWS) and proficiency in leveraging various AWS services for data storage, processing, and analytics.
- Hands-on experience with the AWS Cloud Development Kit (CDK) and TypeScript to build infrastructure as code (IaC) for AWS resources.
- Familiarity with big data technologies such as Apache Spark, Apache Hadoop, or AWS Elastic MapReduce (EMR).
- Solid understanding of database management systems, both relational and NoSQL, and expertise in query optimization and database performance tuning.
- Proficiency in data modeling and designing efficient data structures for analytical workloads.
- Experience with CI/CD pipelines and a strong DevOps mindset to ensure continuous integration and delivery.
- Excellent problem-solving and analytical skills with the ability to resolve complex data engineering challenges.
- Strong communication and collaboration skills, with a demonstrated ability to work effectively in a team-oriented environment.
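The posting calls for defining the analytics platform with the AWS Cloud Development Kit (CDK); its preferred qualifications mention TypeScript, but CDK also ships Python bindings, which are used here to stay consistent with the other sketches. A tiny, hedged example (the stack and bucket names are placeholders) of declaring a data-lake bucket as infrastructure as code:

```python
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct


class DataLakeStack(cdk.Stack):
    """Tiny illustrative CDK v2 stack: one versioned, encrypted bucket for a raw data-lake zone."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        s3.Bucket(
            self,
            "RawZoneBucket",  # logical ID; placeholder
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )


app = cdk.App()
DataLakeStack(app, "DiscoDataLakeStack")  # hypothetical stack name
app.synth()
```

A real platform stack would add Glue databases, Redshift or EMR resources, and IAM roles, but they follow the same construct pattern.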
Data Engineer II, Analytics & Data Management (ADM)
Amazon, Bengaluru, Any
BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines

DESCRIPTION
Amazon was recently voted the #2 most admired company in the US, #1 most innovative, and #1 in customer service. We are investing heavily in building an excellent advertising business and are responsible for defining and delivering a collection of self-service performance advertising products - "always-on analytics" - that is fully scalable and reliable. Our products are strategically important to our leadership, finance, economists, analysts, and BI partners to drive long-term growth. We mine billions of ad impressions and millions of clicks daily and are breaking fresh ground to create world-class products. We are highly motivated, collaborative, and fun-loving, with an entrepreneurial spirit and a bias for action. With a broad mandate to experiment and innovate, we are growing at an unprecedented rate with a seemingly endless range of new opportunities.

The Advertising Analytics and Data Management team is looking for an exceptional Data Engineer who is passionate about data and the insights that large amounts of data can provide, who thinks and acts globally, and who has the ability to contribute to major novel innovations in the industry. The role will focus on working with a team of data engineers and business- and tech-savvy professionals to lay down a scalable data architecture to ingest large amounts of structured and unstructured datasets, and to work with stakeholders to drive business decisions based on these datasets. The ideal candidate will possess both a data engineering background and the strong business acumen needed to think strategically and add value to the customer experience. They will experience a wide range of problem-solving situations requiring extensive use of data collection and analysis techniques such as data mining and machine learning.

The successful candidate will work with multiple global site leaders, Business Analysts, Software Developers, Database Engineers and Product Management, in addition to stakeholders in sales, finance, marketing and service teams, to create a coherent customer view. They will:
- Develop and improve the current data architecture using AWS Redshift, AWS S3, AWS Aurora (Postgres) and Hadoop/EMR.
- Improve upon the data ingestion models, ETL jobs, and alarming to maintain data integrity and data availability. (See the monitoring sketch after this posting.)
- Stay up to date with advances in data persistence and big data technologies, and run pilots to design the data architecture to scale with the increased data sets of the advertiser experience.
- Partner with analysts across teams such as product management, operations, sales, finance, marketing and engineering to build and verify hypotheses to improve business performance.
- Manage weekly business reports via dashboards and prepare the analyses of daily, weekly, and monthly reporting of performance via key performance indicators.

We are open to hiring candidates to work out of one of the following locations: Bengaluru, KA, IND.

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Date posted: 04/16/2024 10:15 PM
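The "alarming to maintain data integrity and data availability" responsibility above is often implemented by emitting pipeline health metrics that an alarm can watch. A hedged sketch (the Redshift endpoint, table and metric namespace are placeholders invented for illustration):

```python
import boto3
import psycopg2

cloudwatch = boto3.client("cloudwatch")

# Placeholder query and objects; the posting names Redshift and alarming, not a schema.
QUERY = "SELECT COUNT(*) FROM ads.impressions WHERE event_date = CURRENT_DATE - 1;"


def publish_row_count_metric() -> None:
    """Emit yesterday's row count as a CloudWatch metric so an alarm can catch a missing load."""
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="ads", user="etl_user", password="secret",
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(QUERY)
            (row_count,) = cur.fetchone()
    finally:
        conn.close()

    cloudwatch.put_metric_data(
        Namespace="AdsAnalytics/ETL",
        MetricData=[{"MetricName": "ImpressionsRowCount", "Value": float(row_count), "Unit": "Count"}],
    )


if __name__ == "__main__":
    publish_row_count_metric()
```

A CloudWatch alarm on this metric (for example, value below an expected threshold) then pages the on-call engineer when an ingestion job silently under-delivers.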
Data Engineer - Snowflake - Pune
Michael Page, Pune
- A total of 4-6 years of intensive working knowledge of Snowflake SQL scripting.
- Working knowledge of PL/SQL and JavaScript, with good knowledge of creating stored procedures, functions and views.
- Good knowledge of Snowflake architecture concepts: cloud services, virtual warehouses, storage.
- Good exposure to performance tuning and optimization on Snowflake.
- Basic knowledge of SAP BW.
- A good functional, business-oriented mindset, as this role involves a high level of interaction with business users and key users.
- Master's or bachelor's degree in computer science or IT.
- Very good English skills.
- Very good intercultural communication and collaboration skills, with a minimum of 2 years of experience in international collaboration.
- Organizational and planning skills, with the ability to coordinate and manage multiple complex projects and organize workloads in a structured way within tight deadlines.
- Critical thinking skills and the ability to work under pressure.
- An organized approach to problem solving, with good decision-making skills.
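The requirements above pair Snowflake SQL scripting with JavaScript stored procedures. As a hedged sketch only (the account, credentials, warehouse and table names are placeholders), creating and calling a small JavaScript procedure through the Snowflake Python connector might look like:

```python
import snowflake.connector

# Placeholder account, credentials and object names; the posting names Snowflake, not a schema.
conn = snowflake.connector.connect(
    account="example-account",
    user="etl_user",
    password="secret",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

CREATE_PROC = """
CREATE OR REPLACE PROCEDURE purge_old_rows(days FLOAT)
RETURNS STRING
LANGUAGE JAVASCRIPT
AS
$$
  // Procedure arguments are referenced in upper case inside the JavaScript body.
  var stmt = snowflake.createStatement({
    sqlText: "DELETE FROM analytics.events WHERE event_date < DATEADD(day, ?, CURRENT_DATE())",
    binds: [-DAYS]
  });
  stmt.execute();
  return "purged rows older than " + DAYS + " days";
$$;
"""

cur = conn.cursor()
try:
    cur.execute(CREATE_PROC)
    cur.execute("CALL purge_old_rows(90)")
    print(cur.fetchone())  # the procedure's return string
finally:
    cur.close()
    conn.close()
```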