Experience
- Experience in building and managing data pipelines
- Experience with the development and operation of data pipelines in the cloud (preferably Azure)
- Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark
- Deep expertise in architecting data pipelines in the cloud using cloud-native technologies
- Good experience with both ETL and ELT ingestion patterns
- Hands-on experience working on large volumes of data (petabyte scale) with distributed compute frameworks
- Good understanding of container platforms such as Kubernetes and Docker
- Excellent knowledge of and experience with object-oriented programming
- Familiarity with developing RESTful API interfaces
- Experience with data serialization formats such as JSON and YAML
- Proficiency in relational database design and development
- Good knowledge of data warehousing concepts
- Working experience with agile Scrum methodology

Technical Skills
- Strong skills in distributed cloud data analytics platforms such as Databricks, HDInsight, and EMR clusters
- Strong programming skills in Python, Java, R, Scala, etc.
- Experience with stream-processing systems: Kafka, Apache Storm, Spark Streaming, Apache Flink, etc.
- Hands-on working knowledge of cloud data lake stores such as Azure Data Lake Storage
- Data pipeline orchestration with Azure Data Factory or AWS Data Pipeline
- Good knowledge of file formats such as ORC, Parquet, Delta, and Avro
- Good experience using SQL and NoSQL databases such as MySQL, Elasticsearch, MongoDB, PostgreSQL, and Cassandra with huge volumes of data
- Strong experience in networking and security measures
- Proficiency with CI/CD automation, specifically with DevOps build and release pipelines
- Proficiency with Git, including branching/merging strategies, pull requests, and basic command-line functions
- Good data modelling skills

Job Responsibilities
- Cloud analytics, storage, security, resiliency, and governance
- Build and maintain the data architecture for data engineering and data science projects
- Extract, transform, and load data from source systems to a data lake or data warehouse, leveraging a combination of IaaS and SaaS components
- Perform compute on huge volumes of data using open-source projects such as Databricks/Spark or Hadoop
- Define table schemas and adapt the pipeline quickly
- Work with high-volume unstructured and streaming datasets
- Manage NoSQL databases in the cloud (AWS, Azure, etc.)
- Architect solutions to migrate projects from on-premises to the cloud
- Research, investigate, and implement newer technologies to continually evolve security capabilities
- Identify valuable data sources and automate collection processes
- Implement adequate networking and security measures for the data pipeline
- Implement monitoring solutions for the data pipeline
- Support the design and implementation of data engineering solutions
- Maintain excellent documentation for understanding and accessing data storage
- Work independently as well as in teams to deliver transformative solutions to clients
- Be proactive and pay constant attention to the scalability, performance, and availability of our systems
- Establish the privacy/security hierarchy and regulate access
- Collaborate with engineering and product development teams
- Apply a systematic problem-solving approach with strong communication skills and a sense of ownership and drive

Qualifications
- Bachelor's or Master's degree in Computer Science or a relevant stream
- Any relevant cloud data engineering certification
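The requirements above ask for experience with both ETL and ELT ingestion patterns. As a minimal, purely illustrative sketch (not part of the role description), the difference can be shown with Python's standard library, using an in-memory sqlite3 database as a stand-in for a warehouse; the table names, sample data, and helper functions here are hypothetical:

```python
import csv
import io
import sqlite3

# Hypothetical raw extract from a source system: one row has a bad value.
RAW = "id,amount\n1,10.5\n2,not_a_number\n3,7.0\n"

def etl_load(conn):
    """ETL: transform (clean/cast) in the pipeline, then load only valid rows."""
    conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
    for row in csv.DictReader(io.StringIO(RAW)):
        try:
            conn.execute("INSERT INTO sales VALUES (?, ?)",
                         (int(row["id"]), float(row["amount"])))
        except ValueError:
            pass  # bad record is dropped before it reaches the warehouse

def elt_load(conn):
    """ELT: load the raw text as-is; transform later inside the warehouse."""
    conn.execute("CREATE TABLE sales_raw (id TEXT, amount TEXT)")
    for row in csv.DictReader(io.StringIO(RAW)):
        conn.execute("INSERT INTO sales_raw VALUES (?, ?)",
                     (row["id"], row["amount"]))
    # The transformation lives in the warehouse, e.g. as a cleaned view.
    conn.execute("""CREATE VIEW sales_clean AS
                    SELECT CAST(id AS INTEGER) AS id,
                           CAST(amount AS REAL) AS amount
                    FROM sales_raw
                    WHERE amount GLOB '[0-9]*'""")

conn = sqlite3.connect(":memory:")
etl_load(conn)
elt_load(conn)
print(conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0])      # cleaned rows
print(conn.execute("SELECT COUNT(*) FROM sales_raw").fetchone()[0])  # all raw rows
```

The design trade-off this illustrates: ETL keeps the warehouse clean but discards information early, while ELT retains every raw record and defers cleaning to SQL inside the store.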
Discounts for employees possible
Health Benefits
Mobile Phone possible
Company Retirement
Hybrid Work possible
Company car possible
Events for employees
Flexitime possible
Good public transport
In-house doctor
Annual profit share possible
Barrier-free workplace
Contact: Mercedes-Benz Research and Development India Private Limited
Brigade Tech Gardens, Katha No. 119560037, Bengaluru
Join us