Hadoop Big Data Developer
Summary
Title: Hadoop Big Data Developer
ID: 71820
Location: Charlotte, NC
Department: Information Technology
Description
JOB DESCRIPTION: Spark, Scala/Python, Hive, Hadoop big data developer with exposure to cloud (Azure preferably).
- 4-5 years of experience building and implementing data ingestion and curation processes using big data tools such as Spark (Scala/Python), Hive, HDFS, Sqoop, HBase, Kerberos, Sentry, and Impala.
- Ingesting huge volumes of data from various platforms for analytics needs and writing high-performance, reliable, and maintainable ETL code.
- Strong SQL knowledge and data analysis skills for data anomaly detection and data quality assurance.
- Hands-on experience writing shell scripts, complex SQL queries, and Hadoop commands, and using Git.
- Good hands-on experience creating databases, schemas, and Hive tables (external and managed) with various file formats (ORC, Parquet, Avro, Text, etc.), complex transformations, partitioning, bucketing, and performance optimizations.
- Recent exposure to cloud is good to have; Azure is preferred.
- Spark: complex transformations, DataFrames, semi-structured data, utilities using Spark, Spark SQL, and Spark configurations (see the sketch after this list).
- Proficiency and extensive experience with Spark and Scala/Python, including performance tuning, is a must.
- Monitoring performance of production jobs and advising on any necessary infrastructure changes.
- Ability to write abstracted, reusable code components.
- Code versioning experience using Bitbucket and CI/CD pipelines.
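To illustrate the kind of ingestion and curation work described above, here is a minimal Spark/Scala sketch that reads raw semi-structured data, applies simple curation transformations, and writes a partitioned, Parquet-backed managed Hive table. The paths, database/table names, and column names are hypothetical placeholders, not part of this posting.

```scala
// Minimal Spark/Scala sketch of an ingestion-and-curation step (illustrative only).
// All paths, database/table names, and columns below are hypothetical placeholders.
import org.apache.spark.sql.{SparkSession, functions => F}

object IngestOrders {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-ingestion")
      .enableHiveSupport()          // lets saveAsTable create managed Hive tables
      .getOrCreate()

    // Ingest raw semi-structured data (JSON here) from HDFS.
    val raw = spark.read.json("hdfs:///data/raw/orders/")

    // Simple curation: type casting, deduplication, and a derived partition column.
    val curated = raw
      .withColumn("order_ts", F.to_timestamp(F.col("order_ts")))
      .withColumn("order_date", F.to_date(F.col("order_ts")))
      .dropDuplicates("order_id")

    // Write a Parquet-backed managed Hive table, partitioned by order_date.
    curated.write
      .mode("overwrite")
      .format("parquet")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_curated")

    // Spark SQL on the curated table, e.g. a quick data-quality check.
    spark.sql(
      "SELECT order_date, COUNT(*) AS cnt FROM analytics.orders_curated GROUP BY order_date"
    ).show()

    spark.stop()
  }
}
```

An external table could instead be declared with a CREATE EXTERNAL TABLE statement via spark.sql pointing at an HDFS location, and bucketing can be added with bucketBy before saveAsTable; the sketch above keeps only the managed-table path for brevity.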