Job Description


ektello is working with an online media leader to find two talented Data Engineers to join our Big Data Product team. This is a long-term contract, and all contractors must be willing and able to work on ektello's W2; no C2C is allowed for this role. While on contract, ektello offers paid holidays, paid vacation days, and medical benefit options.


Requirements



  • Develop and validate Big Data products that run on a large Hadoop cluster.

  • Develop and test ETL processes.

  • Build big data and batch/real-time analytical solutions that leverage emerging technologies.

  • Perform data migration and conversion activities on different applications and platforms.

  • Design, develop, and test data ingestion pipelines; perform end-to-end automation of ETL processes and migration of various datasets (a minimal sketch follows this list).

  • Perform data profiling, discovery, and analysis; assess the suitability and coverage of data; and identify the data types, formats, and data quality issues that exist within a given data source.
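
To make the ingestion and automation items above concrete, here is a minimal PySpark sketch of an extract-validate-load step. It assumes a Spark-on-Hadoop environment with Hive support; the paths, column names, and target table are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch of an ingest-and-validate step; all names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("ingest-and-validate")   # hypothetical application name
    .enableHiveSupport()
    .getOrCreate()
)

# Extract: read raw CSV files landed on HDFS (hypothetical path).
raw = spark.read.option("header", True).csv("/data/landing/events/")

# Transform/validate: drop rows missing the key field and parse timestamps;
# values that fail to parse become NULL and can be profiled separately.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)
print(f"rows rejected during validation: {raw.count() - clean.count()}")

# Load: append into a partitioned Hive table (hypothetical table name).
(clean.write.mode("append")
      .partitionBy("event_date")
      .saveAsTable("analytics.events_clean"))
```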


Preferred Requirements



  • Bachelor’s degree in Computer Science or equivalent education/training

  • 4–5 years of software development and testing experience.

  • 3+ years of working experience with tools such as Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce, etc.

  • 3+ years of programming experience in Scala, Java, or Python

  • Experience with development and automated testing in a CI/CD environment; knowledge of Git/Jenkins and pipeline automation is a must.

  • Experience developing and testing real-time data processing and analytics application systems.

  • Strong knowledge of SQL development for databases and/or BI/DW environments

  • Strong knowledge of shell scripting

  • Experience with web services


Company Description

Search Current Career Opportunities: http://w.ektello.com/search-jobs



Job Description


Ranked as a top Fortune 100 financial services company, our client has an opening for a Big Data Engineer to work in Charlotte, NC (the role will start out remote). This will initially be a 12-month contract assignment with the potential to extend up to 24 months.


 


Imagine everything that you could learn to advance your career while working at a Fortune 100 company with around $2.0 trillion in assets. 


Check out the qualifications below.  If you feel that you meet these, please apply online.  Let us help you make that next great career decision!


 


Position overview:


 



  • Design high-performing data models on big-data architecture as data services.

  • Design and build a high-performing, scalable data pipeline platform using Hadoop, Apache Spark, and Amazon S3-based object storage (see the sketch after this list).

  • Partner with Enterprise data teams, such as Data Management & Insights and Enterprise Data Environment (Data Lake), to identify the best place to source the data.

  • Work with business analysts, development teams, and project managers on requirements and business rules.

  • Collaborate with source system and approved provisioning point (APP) teams, Architects, Data Analysts and Modelers to build scalable and performant data solutions.

  • Effectively work in a hybrid environment where legacy ETL and Data Warehouse applications and new big-data applications co-exist

  • Work with Infrastructure Engineers and System Administrators as appropriate in designing the big-data infrastructure.

  • Work with DBAs in Enterprise Database Management group to troubleshoot problems and optimize performance

  • Support ongoing data management efforts for Development, QA and Production environments

  • Utilize a thorough understanding of available technology, tools, and existing designs.

  • Act as an expert technical resource to programming staff in the program development, testing, and implementation process.
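
As a rough illustration of the Hadoop/Spark/S3 pipeline item above, here is a minimal PySpark sketch that curates a table and writes it to S3-based object storage. The bucket, prefix, and source table are hypothetical, and credentials are assumed to come from the environment (for example, an instance profile).

```python
# Minimal sketch of a Spark job writing curated data to S3 object storage.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("curate-to-s3").getOrCreate()

# Source: a table already registered in the metastore (hypothetical name).
orders = spark.table("raw.orders")

# A simple curation step: keep only settled orders.
curated = orders.where("status = 'SETTLED'")

# Sink: Parquet on S3. The s3a:// scheme assumes the Hadoop S3A connector
# is available, which is typical on Hadoop/EMR clusters.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://example-data-lake/curated/orders/"))
```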


 


The ideal candidate would possess:



  • 10+ years of application development and implementation experience

  • 10+ years of experience delivering complex enterprise-wide information technology solutions

  • 10+ years of ETL (Extract, Transform, Load) Programming experience

  • 10+ years of reporting experience, analytics experience, or a combination of both

  • 5+ years of Hadoop experience

  • 5+ years of operational risk, conduct risk, or compliance domain experience

  • 5+ years of experience delivering ETL, data warehouse and data analytics capabilities on big-data architecture such as Hadoop

  • 5+ years of Java or Python experience

  • Excellent verbal, written, and interpersonal communication skills

  • Ability to work effectively in a virtual environment where key team members and partners are in various time zones and locations

  • Knowledge and understanding of project management methodologies used in waterfall or Agile development projects

  • Knowledge and understanding of DevOps principles

  • Ability to interact effectively and confidently with senior management

  • Experience designing and developing data analytics solutions using object data stores such as S3

  • Experience with Hadoop ecosystem tools for real-time and batch data ingestion, processing, and provisioning, such as Apache Flume, Apache Kafka, Apache Sqoop, Apache Flink, Apache Spark, or Apache Storm


 


This is a W2 position with Advantage Resourcing. We will not accept resumes from recruiters or agencies.


We offer benefits for purchase that include medical, dental, and vision insurance, as well as an excellent 401(k) plan.


 


Advantage Reference # 1008998




About Advantage Resourcing


Advantage Resourcing makes all employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, ancestry, medical condition, age, marital status, national origin, citizen status, political affiliation, union membership, genetic information, physical or mental disability, veteran status, denial of medical or family leave, pregnancy or pregnancy disability leave or any other protected group status as defined by federal, state or local law. We will provide reasonable accommodations throughout the application or interviewing process. If you require a reasonable accommodation, contact us. Advantage Resourcing is an E-verify employer.



Company Description

Let’s find your next job – together. Whether you’re looking for temporary work or a direct-hire job, Advantage Resourcing will connect you to an opportunity that closely matches your interests and skills. Advantage Resourcing is a proud member of Staffmark Group, an award-winning family of staffing brands with a national network of 450+ offices. We connect over 250,000 people to jobs each year, and we’re ready to put this expertise to work for you! Learn more at www.advantageresourcing.com.



Job Description


Project Overview

The Big Data Engineer will act as a key individual contributor and is expected to have exceptional Hadoop technical expertise, an aptitude for learning new technologies, and a drive to push boundaries in solving real business problems. This individual will be responsible for planning and executing Big Data analytics, predictive analytics, and machine learning initiatives. The position requires a strong understanding of Hadoop distributed systems, strong experience with open source frameworks, attention to detail, deep technical expertise, excellent communication skills, and exceptional follow-through.

Primary Responsibilities  



  • Perform architecture design, data modeling, and implementation of Big Data platform and analytic applications. 

  • Translate complex functional and technical requirements into detailed architecture, design, and development 

  • Work on multiple projects as a technical lead, driving user story analysis and elaboration, design and development of software applications, testing, and build automation.

  • Design efficient and robust Hadoop solutions to improve performance and end-user experience.

  • Implement and administer the Hadoop ecosystem, including installing software patches and upgrades and managing configuration.

  • Conduct performance tuning of Hadoop clusters.

  • Monitor and manage Hadoop cluster job performance, capacity planning, and security.

  • Perform detailed analysis of business problems and technical environments and use this analysis in designing the solution.

  • Define and maintain data architecture, with a focus on creating strategy, researching emerging technology, and applying technology to enable business solutions.

  • Define storage and compute (CPU) estimation formulas for ELT and data consumption workloads driven by reporting tools and ad hoc users (a back-of-envelope sketch follows this list).

  • Analyze the latest Big Data analytics technologies and their innovative applications in both business intelligence analysis and new service offerings; adopt and implement these insights and best practices.
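
For the estimation-formula item above, here is a back-of-envelope sizing sketch. Every figure and coefficient is an illustrative assumption, not a number from this posting.

```python
# Back-of-envelope sizing sketch for storage and compute estimation.
# All defaults below are illustrative assumptions.

def hdfs_storage_tb(daily_ingest_gb: float,
                    retention_days: int,
                    replication_factor: int = 3,
                    compression_ratio: float = 0.4,
                    headroom: float = 1.25) -> float:
    """Estimated raw HDFS capacity in TB for a batch ELT workload."""
    logical_gb = daily_ingest_gb * retention_days * compression_ratio
    return logical_gb * replication_factor * headroom / 1024

def cluster_vcores(concurrent_queries: int,
                   executors_per_query: int = 10,
                   cores_per_executor: int = 4) -> int:
    """Rough vcore count to serve ad hoc and reporting consumption."""
    return concurrent_queries * executors_per_query * cores_per_executor

if __name__ == "__main__":
    print(f"storage: {hdfs_storage_tb(500, 365):.1f} TB")   # ~267 TB with these defaults
    print(f"compute: {cluster_vcores(25)} vcores")          # 1000 vcores with these defaults
```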



Required Qualifications


 



  • Minimum of 8 years of overall hands-on experience with “Big Data” platforms and tools, including Hadoop implementation experience, covering the following:

  • Prior experience in Data Warehouse and BI Analytics projects 

  • Minimum of 5 years of transformation and delivery in the Hadoop ecosystem (such as Hadoop, Pig, Hive, Flume, Oozie, Avro, YARN, Kafka, Storm, and Apache NiFi)

  • Minimum of 5 years of proficiency in Spark, R, Python, Scala, Java/C++, SQL/RDBMS, and data warehousing.

  • Minimum of 5 years of experience in architecture and implementation of large and highly complex projects using Hortonworks (Hadoop Distributed File System) with Isilon commodity hardware. 

  • Minimum of 5 years of proficiency in Unix shell scripting 

  • Minimum of 3 years of experience using different open source tools to architect highly scalable distributed systems. 

  • Hands-on experience with “Big Data” platforms and tools, including data ingestion (batch and real time), transformation, and delivery in the Hadoop ecosystem (such as Hive, Python, R)

  • Individual contributions to strategic initiatives and business-critical initiatives

  • Experience in evolving/managing technologies/tools in a rapidly changing environment to support business needs and capabilities 


 Preferred Qualifications 



  • Exposure to the healthcare domain

  • Good exposure to Hadoop Ecosystems and Isilon Storage 

  • Understanding of Machine Learning and Artificial Intelligence advanced analytics

  • History of working successfully with multinational, cross-functional engineering teams



Education: Bachelor's degree in Computer Science or equivalent required.




Job Description


12-month W2 contract position with our major pharmaceutical client in Mettawa, IL. Apply today!


Candidates must have HANDS-ON development experience!


Job Description:



  • Create and maintain optimal data pipeline architecture.

  • Assemble large, complex data sets that meet functional / non-functional business requirements.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL/NoSQL and AWS ‘big data’ technologies.

  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.


Must have experience using the following software/tools:



  • Experience with big data tools: Hadoop, Spark, Scala, HDFS, EMR, Cloudera, etc.

  • Experience with relational SQL and NoSQL databases (one or more), including Postgres, S3, Redshift/Spectrum, Cassandra, MarkLogic, DynamoDB, Neo4j, etc.

  • Experience with data pipeline and workflow management tools (one or more): Azkaban, Luigi, Airflow, Oozie, Autosys, etc. (see the sketch after this list)

  • Experience with GCP/Azure/AWS cloud services as a platform and as an infrastructure

  • Enterprise experience implementing cloud-based solutions that stitch together custom and managed services such as VPC, SQS, API Gateway, EC2, Route 53, NAT, Lambda, CloudTrail, CloudWatch, Kinesis, S3, and other cloud services.

  • Experience with stream-processing systems: Storm, Spark Streaming, Kinesis, etc.

  • Experience with object-oriented/functional scripting languages: R, Python, Scala, etc.

  • Working experience with reporting tools such as Qlik, ThoughtSpot, Tableau, Power BI, etc.
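
As one concrete example of the workflow management tools listed above, here is a minimal Apache Airflow sketch of a two-task daily pipeline. The DAG id, schedule, and commands are hypothetical placeholders.

```python
# Minimal Airflow 2.x DAG sketch; ids, schedule, and commands are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_etl",          # hypothetical DAG id
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="echo 'pull files from source'",  # placeholder command
    )
    load = BashOperator(
        task_id="load",
        bash_command="echo 'load into warehouse'",     # placeholder command
    )
    extract >> load  # run extract before load
```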


 


Company Description

Swoon connects with job candidates one-on-one to learn exactly who they are and understand which of our Fortune 1000 clients would have their dream jobs. We form relationships, not just connections, and we pride ourselves on our contractor care initiatives.

Our accomplishments continue to increase each year, and we have received some of the highest honors in the industry. We were named a “Best Staffing Firm to Temp For” by Staffing Industry Analysts in 2019, 2018, 2017, 2015 and 2014.



Job Description


 


Big Data Engineer - Hadoop, SQL, Python, AWS


 


**W2 Contract - minimum 12 months - Plano, TX (Remote to start)**


 


The main function of the Big Data Engineer is to develop, evaluate, test and maintain architectures and data solutions within our organization. The typical Big Data Engineer executes plans, policies, and practices that control, protect, deliver, and enhance the value of the organization’s data assets.


 


You will be responsible for completing our client’s transition to fully automated operational reporting across different functions within Care (including repair operations, contact center, digital support, product quality, and finance) and for bringing the Care Big Data capabilities to the next level by designing and implementing a new analytics governance model, with an emphasis on architecting consistent root cause analysis procedures that enhance operational and customer engagement results.


 


Job Responsibilities:



  • Design, construct, install, test and maintain highly scalable data management systems.

  • Ensure systems meet business requirements and industry practices.

  • Design, implement, automate and maintain large scale enterprise data ETL processes.

  • Build high-performance algorithms, prototypes, predictive models and proof of concepts.


 


Day to Day duties:



  • Data collection – gather information and required data fields.

  • Data manipulation – join data from multiple data sources and build ETLs to be sent to Tableau for reporting purposes (see the sketch after this list)

  • Measure & Improve - Implement success indicators to continuously measure and improve, while providing relevant insight and reporting to leadership and teams.


 


MUST HAVE QUALIFICATIONS



  • Analytical and problem solving skills, applied to Big Data domain

  • Proven understanding and hands on experience with Hadoop, Hive, Pig, Impala, and Spark

  • 5-8 years of Python or Java/J2EE development experience

  • 3+ years of demonstrated technical proficiency with Hadoop and big data projects


  • Top technical skills

  • Hadoop

  • SQL

  • Python/shell scripting (for exchanging data between UNIX and other sources and Hadoop; all Hive tables we create will point to files in Hadoop; see the sketch after this list)

  • AWS - ideal, but not a must have – some data comes from AWS S3
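
The scripting item above describes a common pattern: land files in HDFS, then point an external Hive table at them so queries read the files directly. Here is a minimal sketch of that pattern; the paths, database, and columns are hypothetical.

```python
# Minimal sketch: copy a UNIX-side file into HDFS, then create an external
# Hive table over the directory. All paths and names are hypothetical.
import subprocess
from pyspark.sql import SparkSession

# Step 1 (shell step from Python): put the exported file into HDFS.
subprocess.run(
    ["hdfs", "dfs", "-put", "-f", "/tmp/export/usage.csv", "/data/raw/usage/"],
    check=True,
)

# Step 2: create an external Hive table pointed at the HDFS directory.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()
spark.sql("CREATE DATABASE IF NOT EXISTS staging")
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS staging.usage_raw (
        account_id STRING,
        usage_date STRING,
        minutes_used INT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION '/data/raw/usage/'
""")
```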


 


Education/Experience:



  • Bachelor's degree in a technical field such as computer science, computer engineering or related field required.

  • 5-7 years of experience required.

  • Process certification such as Six Sigma, CBPP, BPM, ISO 20000, ITIL, or CMMI.


 


Additional Skills:



  • Ability to work as part of a team, as well as work independently or with minimal direction.

  • Excellent written, presentation, and verbal communication skills.

  • Collaborate with data architects, modelers and IT team members on project goals.

  • Strong PC skills including knowledge of Microsoft SharePoint.


Company Description

Search Current Career Opportunities: http://w.ektello.com/search-jobs



Job Description


A large consumer goods organization is looking for a senior data scientist on a long-term contract basis. The candidate will design, develop, and implement Advanced Analytics data infrastructure to enable analytics that increase sales, reduce costs, or optimize the company's assets.


Responsibilities:



  • Design, create, and operate robust pipelines from internal/external sources to the Data Lake.

  • Transform data to meet the requirements of the Data Science team.

  • Automate and optimize project analytical solutions (end-to-end)

  • Creation of unit tests (see the sketch after this list).

  • Support of production projects.

  • Work under Agile methodology with an advanced analytics team (Data Scientists, Data Translators)
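
To illustrate the pipeline and unit-test items above, here is a minimal sketch of a testable PySpark transformation with a pytest-style test. The column names and the business rule are hypothetical.

```python
# Minimal sketch of a unit-testable pipeline transformation; names are hypothetical.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def standardize_sales(df: DataFrame) -> DataFrame:
    """Drop records without a store id and normalize amounts to USD."""
    return (
        df.filter(F.col("store_id").isNotNull())
          .withColumn("amount_usd", F.col("amount") * F.col("fx_rate"))
    )

# A pytest-style unit test (run with `pytest`); uses a local Spark session.
def test_standardize_sales():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    rows = [("s1", 10.0, 1.1), (None, 5.0, 1.1)]
    df = spark.createDataFrame(rows, ["store_id", "amount", "fx_rate"])
    out = standardize_sales(df).collect()
    assert len(out) == 1
    assert abs(out[0]["amount_usd"] - 11.0) < 1e-6
```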


Qualifications:



  • Experience in the development of ETLs (3 years)

  • Experience with databases (3 years)

    • Azure Data Factory, Azure Databricks, Azure DevOps

    • CI/CD

    • SAP Data Services

    • Scrum



  • Python/Scala (2 years)

  • Experience in at least 2 different programming languages

  • Experience with cloud platforms (Azure desirable)

  • Basic knowledge of best programming practices.

  • Teamwork

  • Autodidact; able to learn independently

  • Results-oriented


Company Description

We are a talent network of data scientists and analytics experts. We help companies build or expand their analytics efforts by finding the right talent to join their teams on a project or full time basis. We combine technology with vast experience in analytics to help our clients move forward. https://chiselanalytics.com

