
Data Engineer Salary in Hyderabad - PayScale


Find statistics for a related position:

Analog Design Engineer, ASIC Verification Engineer, Big Data Engineer, Citrix Engineer, Electrical Site Engineer, ICT Engineer, iOS Engineer, IT Support Engineer, IT Test Data Engineer, L2 Engineer, L2 Support Engineer, Linux Engineer, Mechanical Site Engineer, Middleware Engineer, Network Operations Engineer, NOC Engineer, PLC Engineer, RTL Design Engineer, Senior Database Engineer, Senior Network Engineer, Senior Service Engineer, Senior Verification Engineer, Signal Integrity Engineer, Software Configuration Engineer, Technical Solutions Engineer, Technical Support Analyst, Test Data Engineer, Utility Engineer, Verification Engineer, Web Engineer

Recommended vacancies

Data Engineer, AB Data Analytics and Insights
Amazon, Hyderabad, TS, IN
DESCRIPTION
The Amazon Business Data Analytics and Insights (ABDAI) team is looking for a Data Engineer (DE) to play a significant role in building its large-scale, high-volume, high-performance data integration and delivery services. These data solutions are used primarily in periodic reporting and drive business decision-making, while dealing efficiently with the massive scale of data available through our Data Warehouse and our software systems. You will be responsible for designing and implementing solutions using third-party and in-house reporting tools, modelling metadata, building reports and dashboards, and administering the platform software. You are expected to build efficient, flexible, extensible, and scalable data models, ETL designs, and data integration services, and to support and manage the growth of these data solutions. You must be a self-starter who can learn on the go. Excellent written and verbal communication skills are required, as you will work very closely with diverse teams.

As a Data Engineer in the ABDAI team you will be working in one of the world's largest cloud-based data lakes. You should be skilled in architecting enterprise data warehouse solutions using multiple platforms (EMR, RDBMS, columnar, cloud), and have extensive experience in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills, so you can work with business owners to develop and define key business questions and build datasets that answer them. Above all, you should be passionate about working with huge datasets and love bringing them together to answer business questions and drive change.

About the team
Amazon Business Data Analytics and Insights (ABDAI) has two missions: (1) provide data that is accurate and reliable to accelerate business insights and data-driven innovation in trustworthy, intuitive, and cost-efficient ways; (2) predict and value customer actions so our business partners can be right a lot when making decisions. The ABDAI Data Engineering team ensures that we have the right inputs to measure our business performance. Data is the voice of our customers, and we source it from hundreds of AB and non-AB platforms/systems as well as 3P applications that customers interact with. We own curated source-of-truth datasets and the infrastructure that serves AB users worldwide, and we give external consumers access to our data through secure means. We power outreach campaigns for Sales, Marketing, and Product teams through the HOTW data integrations we built with the various third-party applications that AB has adopted for our needs.

We are open to hiring candidates to work out of the following location: Hyderabad, TS, IND.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
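The core skill this posting asks for, ETL designs that turn raw records into reportable datasets, can be sketched in a few lines. The example below is illustrative only: SQLite stands in for a warehouse such as Redshift, and the table and field names are invented.

```python
import sqlite3

def run_etl(raw_rows):
    """Minimal extract-transform-load sketch: raw order events -> daily revenue table."""
    conn = sqlite3.connect(":memory:")  # stands in for a warehouse such as Redshift
    conn.execute("CREATE TABLE orders (order_id TEXT, day TEXT, amount REAL)")
    conn.execute("CREATE TABLE daily_revenue (day TEXT PRIMARY KEY, revenue REAL)")

    # Transform: drop malformed rows, normalize amounts to float
    clean = [(r["order_id"], r["day"], float(r["amount"]))
             for r in raw_rows
             if r.get("order_id") and r.get("amount") is not None]
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)

    # Load: aggregate staged rows into the reporting table
    conn.execute("""INSERT INTO daily_revenue
                    SELECT day, SUM(amount) FROM orders GROUP BY day""")
    return dict(conn.execute("SELECT day, revenue FROM daily_revenue ORDER BY day"))

raw = [
    {"order_id": "A1", "day": "2024-04-01", "amount": "10.0"},
    {"order_id": "A2", "day": "2024-04-01", "amount": "5.5"},
    {"order_id": None,  "day": "2024-04-01", "amount": "9.9"},  # malformed row, dropped
    {"order_id": "A3", "day": "2024-04-02", "amount": "7.0"},
]
print(run_etl(raw))  # {'2024-04-01': 15.5, '2024-04-02': 7.0}
```

In a real pipeline the extract step would read from S3 or an upstream service and the load would target the warehouse, but the shape stays the same: validate, normalize, then aggregate into a reporting table.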
Sr. Data Engineer
Discovery, Inc. (Formerly Scripps Networks Interactive), Hyderabad, Any
Every great story has a new beginning, and yours starts here. Welcome to Warner Bros. Discovery... the stuff dreams are made of.

Who We Are... When we say, "the stuff dreams are made of," we're not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD's vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms, and the dreamers creating what's next... From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best self. Here you are supported, here you are celebrated, here you can thrive.

Your New Role: This role will work with a fast-paced team to create the data technology stack used to deliver end-user insights for developing the next generation of digital products and product strategies. You'll also help build out an internal data pipeline and develop ways to optimize data technologies to shape our digital data strategy.

Your Role Accountabilities:
- Gain an understanding of brand and enterprise data warehouse/reporting requirements that will inform the data architecture and modeling approach. Employ these learnings to construct appropriate ETL/ELT processes, database queries, and data/permission models that support data aggregation and reporting on actionable insights.
- Write code that is efficient, organized, simple, scalable, and reusable, meeting business requirements. Enthusiasm for learning and finding opportunities to enhance and adapt daily engineering activities is highly desired.
- Deliver end-to-end JIRA user stories meeting the quality expectations of the team.
- Collaborate closely with internal and matrixed departments, including Operations, Technology, PMO, Product Teams, and Data Strategy/Insights, to design, build, test, and deploy data warehouse solutions and reporting tools.
- Gain an understanding of multiple datasets to be able to perform ad-hoc analysis for different business units.
- Prepare effort estimates, including researching and estimating the costs of software development and unit testing. May provide estimates for upgrades of vendor packages and integration with existing systems.
- Participate in QA testing for data pipeline projects as well as implementing changes to the suite of analytical tools.
- Write reproducible data analysis over petabytes of data using innovative technologies.
- Mentor junior team members.
- Flexibility to work in second shift.

Qualifications & Experiences:
- 9-12 years of experience in data engineering
- Bachelor's degree in computer science, information systems, or information technology
- Experience with scripting languages: Python, Scala, AWS Glue, etc.
- Experience in conceptual/logical/physical data modelling, and expertise in relational and dimensional data modelling
- Advanced SQL knowledge, including performance tuning and cost optimization of complex SQL
- Strong experience with MPP databases (Teradata and Snowflake)
- Experience with Snowpipe, tasks, streams, clustering, Time Travel, cache, and data sharing
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Experience building and optimizing 'big data' pipelines, architectures, and data sets
- Experience with AWS cloud services: Athena, QuickSight, Kinesis, Lambda, IAM policies
- Experience in software delivery through continuous integration and automated deployments (for example Git, Bitbucket, Jenkins, etc.)
- Strong experience with data pipeline and workflow management tools, e.g., Airflow, Redwood
- Experience with the Atlassian suite (JIRA, Confluence)
- Experience using and creating APIs
- Experience supporting and working with cross-functional teams in a dynamic environment
- Champions the capabilities and benefits of analytical data assets; partners effectively across departments and stakeholders
- Must be comfortable working in a Linux/Unix environment
- Experience with AWS analytics services like AWS Data Exchange, Data Pipeline, AWS Glue DataBrew, AWS Glue, AWS Lake Formation
- Familiarity with AWS developer tools such as CodeDeploy, CodePipeline
- Experience with public/private API integration, web scraping, and data streaming architecture
- Knowledge of Business Content Interchange (BCI)

Not required but preferred experience:
- Public speaking and presentation skills
- Experience with data science tools such as NumPy, Pandas, R, MATLAB, etc.
- Experience with data processing engines like Spark
- Experience with dbt

How We Get Things Done... This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.

The Legal Bits... Warner Bros. Discovery embraces the opportunity to build a workforce that reflects the diversity of our society and the world around us. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you're a qualified candidate and you require adjustments or accommodations to search for a job opening or apply for a position, please contact us at [email protected]

Date posted: 03/27/2024 11:58 AM
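The relational and dimensional data modelling experience listed above usually means designing star schemas: a central fact table of measures joined to dimension tables that describe each event. A toy illustration follows (SQLite stands in for Snowflake or Teradata; all table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimension tables describe the 'who/what/when' of each event
CREATE TABLE dim_date  (date_key INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE dim_title (title_key INTEGER PRIMARY KEY, title TEXT);
-- The fact table stores measures, with foreign keys into the dimensions
CREATE TABLE fact_views (date_key INTEGER, title_key INTEGER, minutes_watched REAL);

INSERT INTO dim_date  VALUES (1, '2024-04-01'), (2, '2024-04-02');
INSERT INTO dim_title VALUES (10, 'Planet Earth'), (11, 'Harry Potter');
INSERT INTO fact_views VALUES (1, 10, 30), (1, 11, 45), (2, 10, 25);
""")

# A typical reporting query: join facts to a dimension and aggregate
rows = conn.execute("""
    SELECT t.title, SUM(f.minutes_watched)
    FROM fact_views f
    JOIN dim_title t ON t.title_key = f.title_key
    GROUP BY t.title
    ORDER BY t.title
""").fetchall()
print(rows)  # [('Harry Potter', 45.0), ('Planet Earth', 55.0)]
```

Keeping measures in narrow fact tables and descriptive attributes in dimensions is what makes the aggregation queries above cheap on MPP warehouses: the fact table is scanned once and the small dimensions are joined in.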
Sr Data Engineer, Ops Finance Transportation
Amazon, Hyderabad, TS, IN
DESCRIPTION
The Amazon transportation controllership team is looking for a Senior Data Engineer who is data-driven and detail-oriented, to partner in expanding our financial data systems to support our global transportation network. You should have excellent business and communication skills to be able to work with business owners and tech leaders to gather infrastructure requirements, design data infrastructure, and build data pipelines and datasets to meet business needs. You stay abreast of emerging technologies, investigating and implementing them where appropriate. This person must be strategic, have strong analytical skills, robust interpersonal and influencing skills, and strong organizational and project management skills, and be a self-starter. They will need to roll up their sleeves, work both independently and collaboratively across teams, and have a high level of ownership and accountability.

Key job responsibilities
- Architect and develop end-to-end scalable data infrastructure and data pipelines
- Develop data products, deploy highly performant solutions, and build data infrastructure leveraging AWS services (such as EC2, Redshift, EMR, Lambda, QuickSight, etc.)
- Improve existing solutions and come up with next-generation data architecture to improve scale, quality, timeliness, coverage, monitoring, and security
- Develop new data models and end-to-end data pipelines
- Lead design reviews for other Data Engineers on the team and actively participate in design reviews of related tech products
- Provide technical and thought leadership for Data Engineering and Business Intelligence
- Create and implement a Data Governance strategy for mitigating privacy and security risks
- Design, implement, and support a platform providing secured access to large datasets
- Interface with multiple finance and accounting customers, gathering requirements and delivering complete data solutions
- Develop ETL pipelines that link various datasets to serve the analysis objectives in the most efficient way
- Analyze and solve problems at their root, stepping back to understand the broader context
- Keep up to date with advances in big data technologies and run pilots to design a data architecture that scales with increased data volume using AWS

A day in the life
Work with the largest dataset in logistics management anywhere in the world to drive business insights, support operations, and build analytical solutions for partners that help drive cost control and decision making. As a Data Engineer, you will be working in one of the world's largest and most complex data warehouse environments. You will design, implement, and support scalable data infrastructure solutions that integrate with multiple heterogeneous data sources, aggregate and retrieve data in a fast and safe mode, and curate data that can be used in reporting, analysis, machine learning models, GenAI, and ad-hoc data requests. You will be exposed to cutting-edge AWS big data technologies.

About the team
Trans Controllership in Finance is one of the most challenging and exciting spaces to work in, with new geographies, new modes, and new carriers being launched every day, all required to seamlessly integrate with the transportation technical systems for automatic ingestion of freight invoices, thorough auditing, timely payment, and proper accounting to keep our books clean and accurate.

We are open to hiring candidates to work out of the following location: Hyderabad, TS, IND.

BASIC QUALIFICATIONS
- 5+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members on best practices
- Design and implementation of automated data contract and data quality enforcement mechanisms
- Expertise in at least one data orchestration tool
- Working experience with the CDK

PREFERRED QUALIFICATIONS
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience operating large data warehouses
- Master's degree
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
- Knowledge of distributed systems as it pertains to data storage and computing
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
- Proven success in communicating with users, other technical teams, and senior management to collect requirements and describe data modeling decisions and data engineering strategy
- Experience providing technical leadership and mentoring other engineers on data engineering best practices
- Strong understanding of and experience with DBMS internals
- Strong understanding of system performance and scaling
- Knowledge of LLMs and GenAI
Analyst, Data Engineer
TTEC, Andhra Pradesh, Hyderabad
TTEC Digital Consulting's Insights Practice is looking for a Data Engineer who is passionate about solving complex data problems and intricate engineering issues with large systems. We combine human insight and the speed of technology to transform our clients' interactions with their customers. Advanced analytics of the customer experience and customer interactions is our expertise.

What you'll be doing: The data engineer will be responsible for all aspects of execution and delivery for new and ongoing client implementations of the analytics platform. This will require working with clients and coordinating with internal business consulting, analyst, data science, and technology teams.

Primary Responsibilities:
- Technical and functional analysis of customer implementation and integration requirements
- Documentation of implementation requirements and expected effort
- Configuration and setup of the analytics platform
- Configuration of data loaders, and development and configuration of workflow processes and customizations to the platform
- Participate in testing
- Proactive identification of internal and external dependencies, highlighting issues, scope changes, and progress against the project plan
- Communication of project status/issues to clients and internal management
- Partner with various internal teams
- Provide technical support to assist clients and partners during and after implementation

Required Experience and Skills:
- Bachelor's degree in Computer Science, Information Systems, or a related field, with 2-3 years of relevant experience
- In-depth familiarity with big data technology and its application
- Proficiency with the Azure or AWS ecosystem
- Experience with big data ETL
- Understanding of complex data flows, identification of data processing bottlenecks, and designing and implementing solutions
- Proficiency in .NET, C#, Python, Linux bash, PowerShell
- Experience consuming third-party REST APIs (JSON) and SDKs
- A broad set of technical skills and knowledge across hardware, software, systems, and solutions development, spanning more than one technical domain
- Experience with U-SQL, PostgreSQL, T-SQL
- Experience in professional services or technical consulting with enterprise software solutions, specifically enterprise software installation, configuration, customization, and testing
- Proven ability to balance and manage multiple, competing priorities
- Collaborative interpersonal skills and ability to work within cross-functional teams
- Self-starter who relies on experience and judgment to plan and accomplish goals in a complex, fast-paced environment, ensuring the quality of all data integration points
- Excellent customer service skills
- Creative problem-solving and analysis skills
- Ability to handle problem situations quickly, inventively, and resourcefully
- Project management skills, including the ability to prioritize and manage tasks; plan, commit, and deliver to schedules; and identify, escalate, and manage project issues
- Willingness to work extended hours on an as-needed basis

Employment Requirements: TTEC requires all employees hired in India to successfully pass a background check, including employment credentials, education, and permanent and current address verification and, if applicable, immigration and work permit documentation, as a condition of employment. Depending on location and client program, a drug test may also be required as a condition of employment.

TTEC is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status. You will be asked to share with TTEC, as well as its representatives, all such personal information as it may require to conduct such background verification, and you agree to TTEC and its representatives collecting and storing such personal information and transferring the same, whether in India or abroad.
Senior Analyst, Data Engineer
TTEC, Andhra Pradesh, Hyderabad
TTEC Digital Consulting's Insights practice is looking for a Data Engineer who is passionate about solving complex data problems and intricate engineering issues with large systems. This person will identify and conduct data-driven projects and analyses that help our clients meet their objectives most effectively. The role also involves providing strategic direction on the development of marketing campaigns, including targeting strategy, value proposition development, and post-program performance evaluation. We combine human insight and the speed of technology to transform our clients' interactions with their customers. Advanced analytics of the customer experience and customer interactions is our expertise.

The successful candidate will be an experienced, highly motivated self-starter with the ability to solve data analysis problems, and will have experience using programming languages to manipulate large amounts of data. The primary responsibility of this role is to analyze large datasets to answer business questions by generating reports and output in Excel. The candidate will work in a team of programmers and business analysts to develop insightful deliverables that support data-driven marketing strategies.

Primary Responsibilities:
- Manage and manipulate multiple large data sets, including defining populations and variables, performing calculations and summarizations, and creating solutions to address client business questions
- Prepare complex analytic deliverables, combining multiple programming outputs to create cohesive reports in Excel
- Help design analytic data set specifications
- Lead in the delivery of large analytic projects
- Mentor team members
- Contribute to the definition and development of key findings and marketing-oriented conclusions

Required Skills/Qualifications:
- Bachelor's degree required, Master's preferred, in STEM or a related field
- 4+ years of professional experience working with large-scale relational databases, and the ability to manipulate, analyze, and maintain relational databases
- Capability to execute the complete analysis process, from data extraction/manipulation through QC, reporting, and delivery of data and reports
- In-depth knowledge of one or more of the following: SAS, Teradata, SQL, R, Python, Unix scripting
- Ability to create automated processes, and knowledge of crontab, Autosys, or related tools
- Aptitude in Microsoft Excel
- Ability to work independently and mentor junior team members
- Strong attention to detail
- Strong verbal and communication skills
- Comfortable working in a team-oriented, deadline-driven environment

Employment Requirements: TTEC requires all employees hired in India to successfully pass a background check, including employment credentials, education, and permanent and current address verification and, if applicable, immigration and work permit documentation, as a condition of employment. Depending on location and client program, a drug test may also be required as a condition of employment. TTEC is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

You will be asked to share with TTEC, as well as its representatives, all such personal information as it may require to conduct such background verification, and you agree to TTEC and its representatives collecting and storing such personal information and transferring the same, whether in India or abroad.
Data Engineer I, SCOT - AIM
Amazon, Hyderabad, Any
BASIC QUALIFICATIONS
- 1+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

DESCRIPTION
SCOT's Automated Inventory Management (AIM) team is seeking talented individuals who enjoy the challenge of diving into complex problems and solving the unsolvable to help our executives make impactful business decisions. The AIM team owns the Availability, Speed, Inventory, and Auditing at Scale metrics that measure and improve the overall customer experience. We regularly work directly with executive leadership to improve that experience. As a Data Engineer, you will analyze large amounts of business data, solve real-world problems, and develop metrics and business cases that will enable us to continually delight our customers worldwide. In this role, you will work closely with a team of Business Intelligence Engineers, Data Engineers, and Data Scientists to automate and architect data pipelines, and to make the data more actionable for managing the business at scale.

Successful candidates will bring strong technical abilities combined with a passion for delivering results for customers, internal and external.

Key job responsibilities
- Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.
- Build and improve data solutions by translating business requirements into robust, scalable, and supportable solutions that work well within the overall system architecture.
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions.
- Monitor and troubleshoot operational or data issues in the data pipelines.

We are open to hiring candidates to work out of one of the following locations: Bangalore, KA, IND | Hyderabad, TS, IND.

PREFERRED QUALIFICATIONS
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets

Date posted: 04/06/2024 09:32 AM
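The "monitor and troubleshoot data issues in the data pipelines" responsibility is commonly implemented as automated data-quality assertions run after each load. A minimal, framework-free sketch (the check names, fields, and thresholds are invented for illustration):

```python
def check_batch(rows, min_rows=1, required_fields=("id", "value")):
    """Run simple data-quality checks on a loaded batch; return a list of failure messages."""
    failures = []
    # Volume check: an empty or tiny batch often signals an upstream outage
    if len(rows) < min_rows:
        failures.append(f"row_count: got {len(rows)}, expected >= {min_rows}")
    # Completeness check: required fields must be populated in every row
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) is None:
                failures.append(f"null_check: row {i} missing '{field}'")
    # Uniqueness check: primary keys must not repeat within the batch
    ids = [r.get("id") for r in rows]
    if len(ids) != len(set(ids)):
        failures.append("uniqueness: duplicate ids in batch")
    return failures

good = [{"id": 1, "value": 10}, {"id": 2, "value": 20}]
bad = [{"id": 1, "value": None}, {"id": 1, "value": 5}]
print(check_batch(good))  # []
print(check_batch(bad))   # reports a missing value and duplicate ids
```

In production these assertions would run as a post-load task in the pipeline's orchestrator, failing the run (or paging the on-call) when the list of failures is non-empty, rather than printing.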
Data Engineer, Translation Services
Amazon, Hyderabad, Any
BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

DESCRIPTION
Are you passionate about data and code? Does the prospect of dealing with massive volumes of data excite you? Do you want to build data engineering solutions that process billions of records and support the handling of translation data using AWS technologies? Do you want to create the next-generation product for self-service data transformation? If so, the Translation Services team is for you! The successful candidate will be a self-starter comfortable with ambiguity, with strong attention to detail, and customer-obsessed about ensuring secondary-language experiences are indistinguishable from primary-language experiences at Amazon scale.

Key job responsibilities
As a Data Engineer, you will own the architecture of DW solutions for the enterprise using multiple platforms. You will have the opportunity to lead the design, creation, and management of extremely large datasets, working backwards from the business use case. You will use your business and communication skills to work with business analysts and engineers to determine how best to design the data warehouse for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes in the data warehouse platform to support the rapidly growing and dynamic business demand for data, and for using them to deliver data as a service, which will have an immediate influence on day-to-day decision making.

A day in the life
The TS data analytics team processes data on billions of words translated per week and transforms it so that TS teams can use it for ML/LLM model training, quality audits for machine and human translators, invoicing, and data insights for tech and program teams. We divide our team between efforts to re-architect two large data warehousing systems and implementing customer feature requests.

About the team
Our vision is: language will not stand in the way of anyone on earth using Amazon products and services. Our mission statement is: we are the enablers and guardians of translation for Amazon's customers. We do this by offering hands-off-the-wheel service to all Amazon teams, optimizing translation quality and speed at the lowest cost possible.

We are open to hiring candidates to work out of the following location: Hyderabad, TS, IND.

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Date posted: 04/10/2024 09:20 AM
Sr. Data Engineer
Amazon, Hyderabad, TS, IN
DESCRIPTION
Are you passionate about data and code? Does the prospect of dealing with massive volumes of data excite you? Do you want to build data engineering solutions that process billions of records and support the handling of translation data using AWS technologies? Do you want to create the next-generation product for self-service data transformation? If so, the Translation Services team is for you!

The goal of the Intech Translation Services (TS) team is to provide high-quality, on-time translations to Amazon business teams, enabling them to expand globally at scale. Our team has a direct impact on retail CX, a key component that runs our Amazon flywheel.

As a Data Engineer, you will own the architecture of DW solutions for the enterprise using multiple platforms. You will have the opportunity to lead the design, creation, and management of extremely large datasets, working backwards from the business use case. You will use your strong business and communication skills to work with business analysts and engineers to determine how best to design the data warehouse for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes in the data warehouse platform to support the rapidly growing and dynamic business demand for data, and for using them to deliver data as a service, which will have an immediate influence on day-to-day decision making.

Key job responsibilities
- Develop data products, infrastructure, and data pipelines leveraging AWS services (such as Redshift, Kinesis, EMR, Lambda, etc.) and internal BDT tools (Datanet, Cradle, QuickSight, etc.)
- Improve existing solutions and come up with next-generation data architecture to improve scale, quality, timeliness, coverage, monitoring, and security
- Develop new data models and end-to-end data pipelines
- Lead design reviews for other Data Engineers on the team and actively participate in design reviews of related tech products
- Provide technical and thought leadership for Data Engineering and Business Intelligence within the TS org
- Create and implement a Data Governance strategy for mitigating privacy and security risks

We are open to hiring candidates to work out of the following location: Hyderabad, TS, IND.

BASIC QUALIFICATIONS
- 5+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members on best practices

PREFERRED QUALIFICATIONS
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience operating large data warehouses
Sr. Staff Data Engineer (Data Platform)
Discovery, Inc. (Formerly Scripps Networks Interactive), Hyderabad, Any
Every great story has a new beginning, and yours starts here. Welcome to Warner Bros. Discovery... the stuff dreams are made of.

Sr Staff Data Engineer - Hyderabad, India

About Warner Bros. Discovery: Warner Bros. Discovery, a premier global media and entertainment company, offers audiences the world's most differentiated and complete portfolio of content, brands, and franchises across television, film, streaming, and gaming. The new company combines WarnerMedia's premium entertainment, sports, and news assets with Discovery's leading non-fiction and international entertainment and sports businesses. For more information, please visit www.wbd.com.

Meet our team: The Data & Analytics organization is at the forefront of developing and maintaining frameworks, tools, and data products vital to WBD, including the flagship streaming product Max and non-streaming products such as the Films Group, Sports, News, and the overall WBD ecosystem. Our mission is to foster unified analytics and drive data-driven use cases by leveraging a robust multi-tenant platform and semantic layer. We are committed to delivering innovative solutions that empower teams across the company to catalyze subscriber growth, amplify engagement, and execute timely, informed decisions, ensuring our continued success in an ever-evolving digital landscape.
Roles & Responsibilities: We are looking for a highly motivated Sr Staff Data Engineer to build a state-of-the-art data platform that solves data-driven use cases across the organization. This platform will host data products such as, but not limited to, Subscription, Content and Product Analytics, Personalization and Recommendation, and Marketing & Ad-Sales enablement. You will be charged with building a new core data platform in the cloud that handles both streaming and batch data processing and can support any big data initiative in scope now or evolving in the future. You will help data engineers, analysts, and scientists perform their functions by building highly scalable capabilities across the platform. This individual will bring expertise in a wide variety of big data processing frameworks (both open source and proprietary), large-scale database systems (OLAP and OLTP), stream data processing, API development, machine learning operationalization, and cloud automation to build and support all the data needs across our platform.
- Take a lead role in translating business requirements into engineering architecture.
- Build software across our entire cutting-edge data platform, including event-driven data processing, storage, and serving through scalable and highly available APIs, with cutting-edge technologies.
- Change how we think, act, and utilize our data by performing exploratory and quantitative analytics, data mining, and discovery.
- Think of new ways to make our data platform more scalable, resilient and reliable, then work across our team to put your ideas into action.
- Work closely with data analysts and business stakeholders to make data easily accessible and understandable to them.
- Ensure data quality by implementing reusable data quality frameworks.
- Develop and enforce data engineering, security, and data quality standards through automation.
- Participate in supporting the platform 24x7.
- Be passionate about growing the team: hire and mentor engineers and analysts.
- Be responsible for cloud cost and improving efficiency.

What to Bring:
- Bachelor's degree in computer science or a similar discipline
- 15+ years of experience in software engineering and/or data engineering
- Ability and willingness to learn new technologies and apply them at work in order to stay ahead of the curve
- Expertise in at least a few programming languages: Java, Scala, Python or similar
- Expertise in building and managing large-volume data processing (both streaming and batch) platforms is a must
- Expertise in stream processing systems such as Kafka, Kinesis, Pulsar or similar
- Expertise in distributed data processing frameworks such as Apache Spark, Flink or similar
- Expertise in SQL and NoSQL databases: Apache Cassandra, DynamoDB, MySQL
- Expertise in OLAP databases such as Snowflake or Redshift
- Experience in operationalizing and scaling machine learning models is a huge plus
- Experience with a variety of data tools and frameworks (for example, Apache Airflow, Druid) is a huge plus
- Experience with analytics tools such as Looker or Tableau is preferred
- Cloud (AWS) experience is preferred
- Direct-to-consumer digital business experience is preferred
- Strong interpersonal, communication and presentation skills

How We Get Things Done... This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.

The Legal Bits... Warner Bros. Discovery embraces the opportunity to build a workforce that reflects the diversity of our society and the world around us.
Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you're a qualified candidate and you require adjustments or accommodations to search for a job opening or apply for a position, please contact us at [email protected]. Date posted: 04/17/2024 07:15 AM
Data Engineer, Foundational People Data Services, Amazon
Amazon, Hyderabad, Any
BASIC QUALIFICATIONS
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with Java basics

DESCRIPTION
Interested in working with the latest technology in the cloud computing space? Amazon is looking for talented and enthusiastic application engineers to join the Foundational People Data Services organization. Our team is responsible for integrating the HR system with 100+ downstream systems. This job requires you to hit the ground running, and your ability to learn quickly will define your success. Expertise in Core Java, SQL (Oracle/Presto), and AWS services such as Athena, S3, Batch, Lambda, DynamoDB and CloudWatch is a must. If you're thrilled by the idea of ownership, passionate about HR data and comfortable with the pace of a company shipping software in cycles measured in weeks, not years, we would love to connect with you.

Key job responsibilities
- Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda, Glue, Apache Airflow, etc.
- Build and deliver high-quality data architecture and pipelines to support business analysts, data scientists, and customer reporting needs
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Provide support for incoming tickets, including extensive troubleshooting tasks, with responsibilities covering multiple products, features and services

A day in the life
No two days are the same in FPDS. The nature of the work we do in the Payroll Data Providers team and a constantly shifting threat landscape mean that some days you'll be developing a data pipeline that ensures efficient delivery of data to partner teams, and other days you'll be working on a data platform that is capable of
processing large amounts of data in a short time.

Work/Life Harmony: Our team places high value on work-life harmony and works in a hybrid environment in which engineers spend 3 days working in the office.

On-Call Responsibility: This position involves on-call responsibility. We work as a team to root-cause issues and ensure our systems are resilient and fault-tolerant.

About the team
The PDP team is accountable for all Payroll batch outbound and real-time integrations that enable New Country eXpansions (NCX), In-Country eXpansions (ICX), vendor transitions, payroll compliance and legal needs, and Payroll Operations and vendor-specific requirement changes. The PDP team builds continuous improvement initiatives for real-time integrations and for defect and anomaly detection and reduction directly within the Batch Integration Framework, before the data reaches our customers. PDP also drives new operational excellence features and mechanisms to improve the existing framework and processes, which helps enhance the customer experience and reduces manual errors and corrections of errors (COEs). PDP's work is a business-critical function that ensures payroll is processed for nearly 1.5 million employees in 63 countries across the world and pays Amazonians accurately and on time. Currently, payroll integrations produce nearly 300 million records per day. The transfer of data is required to perform payroll, benefits, compensation, security, hiring, and separation activities for Amazonians.

We are open to hiring candidates to work out of one of the following locations: Hyderabad, TS, IND

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions

Date posted: 04/22/2024 10:48 PM