
Big Data Engineer Salary in India - PayScale


Find the relevant statistics

Related positions (view statistics): Analog Design Engineer, ASIC Verification Engineer, Citrix Engineer, Data Engineer, Electrical Site Engineer, ICT Engineer, iOS Engineer, IT Support Engineer, IT Test Data Engineer, L2 Engineer, L2 Support Engineer, Linux Engineer, Mechanical Site Engineer, Middleware Engineer, Network Operations Engineer, NOC Engineer, PLC Engineer, RTL Design Engineer, Senior Database Engineer, Senior Network Engineer, Senior Service Engineer, Senior Verification Engineer, Signal Integrity Engineer, Software Configuration Engineer, Technical Solutions Engineer, Technical Support Analyst, Test Data Engineer, Utility Engineer, Verification Engineer, Web Engineer

Recommended vacancies

Data Engineer III - Big Data
JPMorgan Chase, Bengaluru, Any

Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team. As a Data Engineer III - Big Data at JPMorgan Chase within the Corporate & Investment Bank Payments Technology Team, you serve as a seasoned member of an agile team that designs and delivers trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities:
- Support review of controls to ensure sufficient protection of enterprise data
- Advise on and make custom configuration changes in one to two tools to generate a product at the business's or customer's request
- Update logical or physical data models based on new use cases
- Frequently use SQL and understand NoSQL databases and their niche in the marketplace
- Add to a team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills:
- Formal training or certification in data lifecycle concepts and 3+ years of applied experience
- Experience across the data lifecycle
- Experience with batch and real-time data processing with Spark or Flink
- Working knowledge of AWS Glue and EMR for data processing
- Experience working with Databricks
- Experience working with Python/Java, PySpark, etc.
- Advanced SQL (e.g., joins and aggregations)
- Working understanding of NoSQL databases
- Significant experience with statistical data analysis and the ability to determine appropriate tools and data patterns to perform analysis

Preferred qualifications, capabilities, and skills:
- Experience building data lakes, data platforms, and data frameworks, and building/designing Data as a Service APIs

About us: JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About the Team: The Corporate & Investment Bank is a global leader across investment banking, wholesale payments, markets and securities services. The world's most important corporations, governments and institutions entrust us with their business in more than 100 countries. We provide strategic advice, raise capital, manage risk and extend liquidity in markets around the world.

Salary: . Date posted: 03/26/2024 10:23 PM
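The SQL bar mentioned in the posting above ("joins and aggregations") can be sketched with Python's built-in sqlite3 module. The merchants/payments tables and all column names below are invented for illustration; they are not part of the posting.

```python
import sqlite3

# In-memory database with two illustrative tables: merchants and payments.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE merchants (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE payments (id INTEGER PRIMARY KEY, merchant_id INTEGER, amount REAL);
    INSERT INTO merchants VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO payments VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
""")

# A join plus an aggregation: total payment volume per merchant.
rows = conn.execute("""
    SELECT m.name, SUM(p.amount) AS total
    FROM payments p
    JOIN merchants m ON m.id = p.merchant_id
    GROUP BY m.name
    ORDER BY m.name
""").fetchall()
print(rows)  # [('Acme', 150.0), ('Globex', 75.0)]
```

The same join/group-by pattern carries over directly to Spark SQL or Redshift, which the posting names.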
Data Engineer - Asia, Middle East & Africa Data Solutions
Procter & Gamble, Mumbai, Any

Job Location: Mumbai

Job Description: This role reports to the Director, Platform Engineering Lead, P&G APAC, Middle East, and Africa (AMA) markets.

Your team - about the AMA Data Solutions & Engineering Team: We take pride in managing the company's most valuable asset in the digital world: data. Our vision is to deliver data as a competitive advantage for the AMA business by building unified data platforms, delivering customized BI tools for managers, and empowering insightful business decisions through AI in data. In this role, you'll be constantly learning, staying up to date with industry trends and emerging technologies in data solutions. You'll have the chance to work with a variety of tools and technologies, including big data platforms, machine learning frameworks and data visualization tools, to build innovative and effective solutions. So, if you're excited about the possibilities of data and eager to make a real impact in the world of business, a career in data solutions might be just what you're looking for. Join us and become a part of the future of digital transformation.

About P&G IT: Digital is at the core of P&G's accelerated growth strategy. With this vision, IT at P&G is deeply embedded in every critical process across business organizations comprising 11+ category units globally, creating impactful value through transformation, simplification and innovation. IT at P&G is sub-divided into teams that engage strongly to revolutionize business processes and deliver exceptional value and growth: Digital GTM, Digital Manufacturing, Marketing Technologist, Ecommerce, Data Sciences & Analytics, Data Solutions & Engineering, and Product Supply.

Responsibilities of the role:
- Understand the business requirements and convert them into technical designs of data pipelines and data models
- Write code to ingest, transform and harmonize raw data into usable refined models
- Analyze the multiple data sets associated with in-scope use cases in order to design and develop the most optimal data models and transformations
- Craft integrated systems, implementing ELT/ETL jobs to fulfil business deliverables
- Perform sophisticated data operations such as data orchestration, transformation, and visualization with large datasets
- Coordinate with data asset managers, architects, and development teams to ensure that the solution is fit for use and meets vital architectural requirements
- Demonstrate standard coding practices to ensure delivery excellence and reusability

Job qualifications and role requirements:
- At least 3 years of experience in data engineering
- Hands-on experience building data models, data pipelines, data ingestion and harmonization, along with data governance
- Hands-on experience in a scripting language like Python, R or Scala
- Backend development expertise on SQL Database, SQL Data Warehouse or any data warehousing solution in the cloud
- Hands-on experience with reporting tools like Power BI or Tableau
- Knowledge of DevOps and CI/CD tools (e.g. Azure DevOps and GitHub)
- Knowledge of cloud technologies (Azure) - at least 2 years, inclusive of software engineering experience
- Knowledge of Agile or Scrum methodologies with a proven track record of successful projects
- Graduate of an engineering or IT-related course

About us: P&G was founded over 185 years ago as a simple soap and candle company. Today, we're the world's largest consumer goods company and home to iconic, trusted brands that make life a little bit easier in small but meaningful ways. We've spanned three centuries thanks to three simple ideas: leadership, innovation and citizenship. The insight, innovation and passion of hardworking teams has helped us grow into a global company that is governed responsibly and ethically, that is open and transparent, and that supports good causes and protects the environment. This is a place where you can be proud to work and do something that matters.

Dedication from us: You'll be at the core of breakthrough innovations, be given exciting assignments, lead initiatives, and take ownership and responsibility, in creative workspaces where new ideas flourish. All the while, you'll receive outstanding training to help you become a leader in your field. It is not just about what you'll do, but how you'll feel: encouraged, valued, purposeful, challenged, heard, and inspired.

What we offer: Continuous mentorship - you will collaborate with passionate peers and receive both formal training and day-to-day mentoring from your manager - and a dynamic and supportive work environment in which employees are at the centre; we value every individual and support initiatives promoting agility and work/life balance.

Just so you know: We are an equal opportunity employer and value diversity at our company. Our mission of diversity and inclusion is: "Everyone valued. Everyone included. Everyone performing at their peak."

Job Schedule: Full time. Job Number: R000102046. Job Segmentation: Experienced Professionals. Salary: . Date posted: 03/26/2024 09:29 AM
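The "ingest, transform and harmonize raw data into usable refined models" responsibility above can be illustrated with a minimal, hypothetical Python sketch: two source feeds with different field names and date conventions are mapped onto one common refined model. All field names, formats, and values here are invented for illustration.

```python
from datetime import datetime

# Two hypothetical source feeds with different schemas and conventions.
source_a = [{"sku": "A1", "sold_on": "2024-03-01", "qty": "3"}]
source_b = [{"product": "a1", "date": "01/03/2024", "units": 3}]

def harmonize_a(row):
    # Map source A's fields onto the common refined model (ISO dates, int units).
    return {
        "product_id": row["sku"].upper(),
        "date": datetime.strptime(row["sold_on"], "%Y-%m-%d").date().isoformat(),
        "units": int(row["qty"]),
    }

def harmonize_b(row):
    # Source B uses day/month/year dates and lowercase product codes.
    return {
        "product_id": row["product"].upper(),
        "date": datetime.strptime(row["date"], "%d/%m/%Y").date().isoformat(),
        "units": int(row["units"]),
    }

# Both feeds now land in one refined, query-ready shape.
refined = [harmonize_a(r) for r in source_a] + [harmonize_b(r) for r in source_b]
print(refined)
```

The same per-source mapping pattern scales up naturally to PySpark or dbt transformations over real feeds.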
Data Engineer, Amazon
Amazon, Bengaluru, KA, IN

DESCRIPTION: Amazon's Consumer Payments organization is seeking a highly quantitative, experienced Business Intelligence Engineer to drive the development of analytics and insights. You will succeed in this role if you are an organized self-starter who can learn new technologies quickly and excel in a fast-paced environment. In this position, you will be a key contributor and sparring partner, developing analytics and insights that global executive management teams and business leaders will use to define global strategies and deep dive businesses. Our team offers a unique opportunity to build a new set of analytical experiences from the ground up. You will be part of the team that is focused on acquiring new merchants from around the world for payments around the world. The position is based in India but will interact with global leaders and teams in Europe, Japan, the US, and other regions. You should be highly analytical, resourceful, customer focused, team oriented, and able to work independently under time constraints to meet deadlines. You will be comfortable thinking big and diving deep. A proven track record of taking on end-to-end ownership and successfully delivering results in a fast-paced, dynamic business environment is strongly preferred.

Key job responsibilities:
- Design, develop, implement, test, and operate large-scale, high-volume, high-performance data structures for analytics and reporting
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, AWS Redshift, and OLAP technologies; model data and metadata for ad hoc and pre-built reporting
- Work with product tech teams to build robust and scalable data integration (ETL) pipelines using SQL, Python and Spark
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Collaborate with Analysts, Business Intelligence Engineers and Product Managers to implement algorithms that exploit rich data sets for statistical analysis and machine learning
- Participate in strategic and tactical planning discussions, including annual budget processes
- Communicate effectively with product, business and tech teams and other data teams

We are open to hiring candidates to work out of the following location: Bengaluru, KA, IND.

BASIC QUALIFICATIONS:
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

PREFERRED QUALIFICATIONS:
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience with an ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc.
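As a rough illustration of the ETL pipelines this posting describes, here is a minimal extract-transform-load sketch in plain Python, with sqlite3 standing in for a warehouse such as Redshift. The CSV layout, column names, and table name are invented for illustration.

```python
import csv
import io
import sqlite3

# Hypothetical raw extract, e.g. as it might arrive from an upstream report.
RAW = "order_id,amount\n1,10.50\n2,4.25\n3,\n"

def extract(text):
    # Parse the raw CSV into dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Drop rows with missing amounts and cast fields to proper types.
    return [(int(r["order_id"]), float(r["amount"])) for r in rows if r["amount"]]

def load(rows, conn):
    # Write the cleaned rows into the target table.
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # (2, 14.75)
```

In a production pipeline each stage would be a separate, monitored task (e.g. orchestrated by a scheduler), but the extract/transform/load separation is the same.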
Data Engineer
Bangalore, IN

Novo Nordisk is one of the world's most successful pharmaceutical companies, with strong growth potential. Digital, Data and IT Solutions is the unit responsible for several systems at Novo Nordisk, including the operation, maintenance, support, and development of these systems. Our people have a unique combination of IT insight and the ability to navigate many agendas and stakeholders. Does your motivation come from challenges and working in a dynamic environment? Are you enthusiastic about understanding the business and bringing attention to key business challenges? Then we might have the right position for you. Apply now.

About the department: Global IT India was established in February 2010 as an integral part of Global IT in HQ, and since then the unit has grown to 500+ employees. The main role of Global IT India is to manage IT, which includes system management, project management, infrastructure management, compliance management, security management, process management and vendor management. The Commercial IT department is an integral part of the Global IT organization within GBS India and is responsible for delivering projects and managing systems for Sales, Marketing, Medical and Market Access teams. In Global IT, we drive projects and manage critical IT systems, services and infrastructure used along the entire value chain of Novo Nordisk.

The position: As a Data Engineer, you will have the opportunity to:
- Apply strong technical experience building highly reliable services for managing and orchestrating a multi-petabyte-scale data lake; you will work in a fast-paced environment
- Take vague requirements and transform them into solid solutions, solving challenging problems where creativity is as crucial as your ability to write code and test cases
- Join our International Operations Data Engineering team, which is responsible for building our data lake components and maintaining our big data pipelines and services, and work directly with business stakeholders, project, and platform teams to enable growth and support our field and back-office users in International Operations and Commercial Strategy and Corporate Affairs (CSCA)
- Help our stakeholder teams ingest data faster into our data lake and find ways to make our data pipelines more efficient, or even come up with ideas to help instigate self-serve data engineering within the setup
- Build micro-services, and architect, design, and enable self-serve capabilities at scale

Qualifications: You are an ideal candidate if you meet the following requisites:
- A bachelor's or master's degree in computer science, information technology, engineering, mathematics, or a related field
- 5 years of experience in data and analytics, having overseen end-to-end implementation of data pipelines on cloud-based data platforms
- Experience with Snowflake and knowledge of transforming data using dbt (Data Build Tool)
- Strong programming skills in Python and PySpark (some combination of Java and Scala preferred)
- Experience with AWS and API integration (and integration in general), with knowledge of data warehousing concepts
- Project/internship experience writing SQL, structuring data, and data storage practices
- Project/internship experience working with Spark, Hive, Airflow and other streaming technologies to process large volumes of streaming data
- Project/internship experience working on Amazon Web Services (in particular EMR, Kinesis, RDS, S3, SQS, Lambda, Glue, Databricks, Redshift, Athena and the like)
- Support for the use of data engineering toolsets (e.g. Python (especially PySpark), SQL, Notebooks)
- Involvement in assembling large, complex unstructured data sets that meet functional and non-functional business requirements
Big Data Engineer (Python, Kafka, Spark)
NetApp, Bangalore, Any

About NetApp: We're forward-thinking technology people with heart. We make our own rules, drive our own opportunities, and try to approach every challenge with fresh eyes. Of course, we can't do it alone. We know when to ask for help, collaborate with others, and partner with smart people. We embrace diversity and openness because it's in our DNA. We push limits and reward great ideas. What is your great idea? "At NetApp, we fully embrace and advance a diverse, inclusive global workforce with a culture of belonging that leverages the backgrounds and perspectives of all employees, customers, partners, and communities to foster a higher performing organization." - George Kurian, CEO

Job Summary: As a Software Engineer at NetApp India's R&D division, you will be responsible for the design, development and validation of software for big data engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month, feeding a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark and various NoSQL databases. It enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this "actionable intelligence". You will work closely with a team of senior software developers and a technical director, and will be responsible for contributing to the design, development and testing of code. The software applications you build will be used by our internal product teams, partners, and customers. We are looking for a hands-on lead engineer who is familiar with Spark and Scala, Java and/or Python; any cloud experience is a plus. You should be passionate about learning, be creative, and have the ability to work with and mentor junior engineers.

Your responsibilities:
• Design and build our Big Data Platform, and understand scale, performance and fault tolerance
• Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community
• Identify the right tools to deliver product features by performing research, POCs and interacting with various open-source forums
• Build and deploy products both on-premises and in the cloud
• Work on technologies related to NoSQL, SQL and in-memory databases
• Develop and implement best-in-class monitoring processes to enable data applications to meet SLAs
• Mentor junior engineers technically
• Conduct code reviews to ensure code quality, consistency and adherence to best practices

Our ideal candidate:
• You have a deep interest in and passion for technology
• You love to code; an ideal candidate has a GitHub repo that demonstrates coding proficiency
• You have strong problem-solving and excellent communication skills
• You are self-driven and motivated, with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities

Experience and education:
• 5+ years of hands-on big data development experience
• Up-to-date expertise in data engineering and complex data pipeline development
• Experience designing, developing, implementing and tuning distributed data processing pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built
• Awareness of data governance (data quality, metadata management, security, etc.)
• Experience with one or more of Python/Java/Scala
• Proven working expertise with big data technologies: Hadoop, HDFS, Hive, Spark/Scala, and SQL
• Knowledge of and experience with Kafka, Storm, Druid, Cassandra or Presto is an added advantage

Did you know? Statistics show women apply to jobs only when they're 100% qualified. But no one is 100% qualified. We encourage you to shift the trend and apply anyway! We look forward to hearing from you.

Why NetApp? In a world full of generalists, NetApp is a specialist. No one knows how to elevate the world's biggest clouds like NetApp. We are data-driven and empowered to innovate. Trust, integrity, and teamwork all combine to make a difference for our customers, partners, and communities. We expect a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off per year to volunteer with their favorite organizations. We provide comprehensive medical, dental, wellness, and vision plans for you and your family. We offer educational assistance, legal services, and access to discounts. We also offer financial savings programs to help you plan for your future. If you run toward knowledge and problem-solving, join us.

Salary: . Date posted: 03/21/2024 03:04 PM
Data Engineer II, Music Data Lake
Amazon, Bengaluru, KA, IN

DESCRIPTION: Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, Amazon Music is innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all the music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Amazon Music Unlimited for unlimited, on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

If you love the challenges that come with big data, then this role is for you. We collect billions of events a day, manage petabyte-scale data on Redshift and S3, and develop data pipelines using Spark/Scala on EMR, SQL-based ETL, and Java services. You are a talented, enthusiastic, and detail-oriented Data Engineer, Data Scientist, Business Intelligence Engineer, or Software Developer who knows how to take on big data challenges in an agile way. Duties include big data design and analysis, data modeling, and the development, deployment, and operation of big data pipelines. You will also help hire, mentor, and develop peers in the Music Data Experience team, including Data Scientists, Data Engineers, and Software Engineers. You'll help build Amazon Music's most important data pipelines and data sets, and expand self-service data knowledge and capabilities through an Amazon Music data university.

This role requires you to live at the cross-section of data and engineering. You have a deep understanding of data, analytical techniques, and how to connect insights to the business, and you have practical experience in insisting on the highest standards of operations in ETL and big data pipelines. With our Amazon Music Unlimited and Prime Music services, and our top music provider spot on the Alexa platform, providing high-quality, high-availability data to our internal customers is critical to our customer experiences. The Music Data Experience team develops data specifically for a set of key business domains like personalization and marketing, and provides and protects a robust self-service core data experience for all internal customers. We deal in AWS technologies like Redshift, S3, EMR, EC2, DynamoDB, Kinesis Firehose, and Lambda. In 2020 your team will migrate Amazon Music's information model and data pipelines to a data exchange store (Data Lake) and an EMR/Spark processing layer. You'll build our data university and partner with Product, Marketing, BI, and ML teams to build new behavioral events, pipelines, datasets, models, and reporting to support their initiatives. You'll also continue to develop big data pipelines.

Key job responsibilities:
- Build Data Platform and Data Lake solutions
- Build data engineering tools
- Build real-time and micro-batch data pipelines
- Build and manage data pipelines

About the team: The Music Data eXperience (MDX) team is responsible for the definition, design, production, and quality of foundational datasets consumed by the whole org, data management tools, and the self-service data lake and warehouse platforms on which these datasets are published, stored, shared, and consumed for analytics and science modeling. MDX is split into two sub-teams: PARAM (Platform Architecture Research and AutoMation) and IDEA (Intelligence, Data Engineering & Analytics). The Data Platform (PARAM) team owns the self-service data lake Data EXchange Store (DEX) and Data Warehouse platforms, builds tools and frameworks for efficient data management, and owns the orchestration and configuration platform for data pipelines. The Data Engineering (IDEA) team owns the foundational data model and datasets, the Spark and Datanet ETL jobs and business logic to build them, away-team support for datasets, org-wide launch support (when required), the Executive Daily Summary (EDS), and future batch dataset data quality frameworks.

We are open to hiring candidates to work out of the following location: Bangalore, KA, IND.

BASIC QUALIFICATIONS:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL

PREFERRED QUALIFICATIONS:
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
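The "real-time and micro-batch data pipelines" responsibility above can be sketched generically: the snippet below groups an event stream into fixed-size micro batches using only the standard library. A real pipeline would read from a source like Kinesis or Kafka; the event shape and batch size here are invented for illustration.

```python
from itertools import islice

def micro_batches(events, batch_size):
    """Yield fixed-size batches from a (possibly unbounded) event iterator."""
    it = iter(events)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Simulated event stream of play events; a real pipeline would consume a queue.
stream = ({"track_id": i, "plays": 1} for i in range(7))

totals = []
for batch in micro_batches(stream, batch_size=3):
    # Aggregate each micro batch before writing downstream.
    totals.append(sum(e["plays"] for e in batch))
print(totals)  # [3, 3, 1]
```

The trade-off a micro-batch design makes is latency (events wait for a batch to fill) against throughput (downstream writes are amortized over the batch).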
Data Engineer
, reference job description, IN
Position Title: Data Engineer

We are passionate about food. But we're even more passionate about our people. We are looking for qualified Data Engineers to join our McCain family. As part of the McCain family, you will be part of an amazing team doing great things.

JOB RESPONSIBILITIES:
- Design and develop solutions, with minimal supervision, that are in line with McCain standards and scalable on the enterprise Azure data platform.
- Create and maintain optimal, reliable data pipelines to meet business needs.
- Define and operate the infrastructure required for optimal extraction, transformation and loading (ETL) of data from a wide variety of data sources, using various Azure technologies as needed.
- Design and implement life-cycle management processes (DevOps) to enable continuous integration and continuous deployment (CI/CD) of data systems.
- Create design documents and communicate the design to delivery teams.
- Contribute to project planning by providing time and effort estimates for development and implementation tasks and deliverables.
- Be a catalyst for change; embrace challenges and new technology.
- Manage assigned deliverables and timelines towards the successful delivery of projects.
- Integrate data from various sources (including external data sources and IoT) and manage big data as a key enterprise asset.
- Create and maintain backend data solutions for data analysts and data scientists, and assist them in unlocking insight from enterprise data.
- Identify data quality issues and make recommendations for addressing root causes.
- Set up monitoring of data pipelines and work on incident resolution based on expected SLAs.
- Work with stakeholders, including product, data and architecture SMEs, to assist with data-related technical issues and support their data infrastructure needs.
- Ensure compliance with data architecture and security requirements from other domain owners.
- Work with other domain SMEs and vendors (e.g. Microsoft) to resolve data-related incidents.

MEASURES OF SUCCESS:
- Compliance with enterprise standards and best practices.
- Solution performance (measured by response time and compute time).
- Incident reduction.
- High availability.
- Data analysis, AI tools and models, and data algorithms.
- Design and evolution of AI and data models.

About You:
- University degree in computer science, information systems or a relevant discipline.
- Strong experience delivering globally scalable solutions.
- Experience with solution cost optimization.
- A minimum of 5 years' experience in a similar role.
- Knowledge of AI models, Agile/Scrum project delivery, DevOps and CI/CD practices.
- Efficient at performing root-cause analysis to address issues and applying long-term fixes.
- Good knowledge of Azure data services (Azure Data Factory, Synapse, Azure Data Lake Storage, Event Hub, Databricks, Cognitive Services, etc.).
- Good knowledge of object-function scripting languages such as Python and JavaScript.

OTHER INFORMATION:
- Travel: as required.
- The job is primarily performed in a standard office environment, but working from home is an option upon agreement with your Manager and McCain policies.

Apply now if you are looking to be part of a flourishing and energetic environment. Join a recognized brand known throughout households across the globe.

McCain Foods is an equal opportunity employer. We see value in ensuring we have a diverse, anti-racist, inclusive, merit-based, and equitable workplace. As a global family-owned company, we are proud to reflect the diverse communities around the world in which we live and work. We recognize that diversity drives our creativity, resilience, and success and makes our business stronger. McCain is an accessible employer. If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meet
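The responsibilities above call for identifying data quality issues and monitoring pipelines against SLAs. A minimal sketch of such a check, validating null rates on required fields before a load step, could look like the following. The function name, fields, and threshold are illustrative assumptions, not McCain's actual tooling.

```python
# Hypothetical pre-load data-quality gate: reject a batch if it is
# empty or if any required field's null rate exceeds a threshold.
def quality_report(rows, required_fields, max_null_rate=0.1):
    """Return (passed, issues) for a batch of records."""
    if not rows:
        return False, ["empty batch"]
    issues = []
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            issues.append(
                f"{field}: null rate {rate:.0%} exceeds {max_null_rate:.0%}"
            )
    return not issues, issues

batch = [{"id": 1, "value": 10}, {"id": 2, "value": None}, {"id": 3, "value": 7}]
ok, problems = quality_report(batch, ["id", "value"])
print(ok, problems)
```

In a production pipeline the report would typically feed a monitoring system (e.g. an Azure Monitor alert) so that SLA breaches open incidents automatically rather than failing silently.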
Big Data Analytics Training Institute in BTM, Bangalore, Marathahalli
, Bangalore
Get Big Data Analytics training in Bangalore (BTM and Marathahalli) from Upshot Technologies. Learn the best Big Data Analytics certification course in Bangalore, BTM Layout and Marathahalli. Getting into the Big Data Analytics profession doesn't require any set o
Big Data Analytics Training Institute in BTM, Bangalore, Marathahalli
, Bangalore
Upshot Technologies provides the best real-time, placement-oriented Big Data training programs in Bangalore, BTM. The emergence of the e-commerce industry has brought with it a whole new dimension to the importance of u