Hadoop Developer

Location :

Atlanta, GA, USA

Job Type :

Onsite

Experience :

10+ Years

About the Role

• Support the Data and Analytics Platform, Information Management, and Solution Delivery teams
• Ensure the design and engineering approach for complex data solutions is consistent across multiple flows and systems, while building processes to support data transformation, data structures, metadata, data quality controls, and dependency and workload management
• Define internal controls, identify gaps in adherence to data management standards, and work with the appropriate partners to develop plans to close them; lead concept and experimentation testing to synthesize results, validate, and improve the solution; and document and communicate the information required for deployment, maintenance, support, and business functionality
• May be required to mentor junior Data Engineers and coach team members in delivery/release activities

Requirements:
• 3-6 years' experience with the Hadoop stack and storage technologies: HDFS, MapReduce, YARN, Hive, Sqoop, Impala, Spark, Flume, Kafka, and Oozie
• Extensive knowledge of big data enterprise architecture (Cloudera preferred)
• Excellent analytical capabilities and a strong interest in algorithms
• Experienced with HBase, RDBMS, SQL, ETL, and data analysis
• Experience with NoSQL technologies (e.g., Cassandra, MongoDB)
• Experienced in scripting (Unix/Linux) and scheduling (Autosys)
• Experience with team delivery/release processes and cadence pertaining to code deployment and release
• Research-oriented, motivated, proactive self-starter with strong technical, analytical, and interpersonal skills
• A team player with good verbal and written communication skills, capable of working with a team of Architects, Developers, Business/Data Analysts, QA, and client stakeholders
• Versatile, with balanced development skills and the business acumen to deliver quickly and accurately
• Proficient understanding of distributed computing principles; continuously evaluates new technologies and innovates to deliver solutions for business-critical applications

Desired skills:
• Object-oriented programming and design experience
• Degree in Computer Science or equivalent
• Experience with automated testing methodologies and frameworks, including JUnit
• Python web frameworks (Django, Flask), data wrangling, and analytics in a Python-based environment
• Fundamentals of Python data structures and collections, and Pandas for handling files and other data types, visualization, etc.
• Knowledge of visual analytics tools (Tableau)
• Experience with big data analytics and business intelligence using industry-standard tools integrated with the Hadoop ecosystem (R, Python)
• Data integration and data security on the Hadoop ecosystem (Kerberos)
• Any big data certification (e.g., Cloudera CCP, CCA)

Requirements

HDFS, MapReduce, YARN, Hive, Sqoop, Impala, Spark, Flume, Kafka, Oozie, HBase, RDBMS, SQL, ETL, and data analysis

*Send your resume by clicking the Apply button

© 2023 BinTech Group LLC.  All Rights Reserved.
