Location: Bangalore
Job Code: MACIN18937
Role: Big Data Engineer
Type of Commute: Hybrid
Skill Set: Java, Python, Spark, Hive, Hadoop, Databricks, JSON, XML
Desired Industry Experience: 5+ years
No. of open positions: 1
Role Description:
Job Title: Big Data Engineer
Employment Type: Full-Time
Mode of Work: Hybrid
Location: Bangalore, India
Summary
We are looking for a high-energy and innovative Big Data Engineer with a strong passion for software development and large-scale data solutions. The selected individual will play an active role in developing and migrating enterprise-wide platforms and applications used for data acquisition, transformation, entity extraction, and content mining across multiple business units.
The ideal candidate will have hands-on experience working across the full technology stack and be proficient in cloud-native application development using modern technologies such as Java, Python, Spark, Hive, Hadoop, Databricks, JSON, XML, Microservices, PostgreSQL, Greenplum, and NoSQL databases. This role offers the opportunity to collaborate across a global footprint and contribute to impactful, data-driven initiatives.
Key Responsibilities
Technical
- Collaborate with users, technical leads, project managers, and cross-functional development teams to design, develop, and deliver scalable data-driven software solutions.
- Actively code in Python and/or Java, leveraging technologies such as Apache Spark, Hive, Databricks, Git, Linux, AWS, PostgreSQL, and Greenplum.
- Design and develop Big Data solutions using Hadoop, Spark, EMR, Hive, and related distributed processing frameworks.
- Build and integrate cloud-native applications and microservices-based architectures.
- Apply object-oriented design patterns to create enterprise-level, modular, and maintainable solutions.
- Demonstrate strong problem-solving skills to enhance existing processes, workflows, and system performance.
- Tune Big Data pipelines for performance and reliability.
- Follow disciplined software development practices, adhering to coding standards, quality guidelines, and corporate policies.
Quality
- Translate business requirements into effective technical designs and high-quality implementations.
- Create and execute unit tests to ensure software quality and stability before delivery to QA.
- Maintain thorough documentation and ensure adherence to software best practices.
- Exhibit strong communication and collaboration skills, working effectively with cross-functional and cross-geography teams.
Project/Team
- Work in an Agile/Scrum development environment with a focus on continuous integration and iterative delivery.
- Contribute to the team’s success through knowledge sharing, peer reviews, and collaborative problem-solving.
Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Desired Skills
Experience
- 5+ years of proven experience in a Big Data environment.
- Hands-on experience with distributed data processing, data pipeline development, and cloud-based data solutions.
Preferred Skills
- Experience with AWS or other cloud environments for Big Data workloads.
- Strong knowledge of Spark, Hive, Hadoop, Databricks, and data integration frameworks.
- Familiarity with CI/CD pipelines, version control (Git), and Linux-based development.
- Exposure to PostgreSQL, Greenplum, and NoSQL data stores.
- Excellent analytical, debugging, and performance optimization skills.