Huckleberry Youth Programs (HYP) is a community-based youth agency, providing a full spectrum of services to homeless, runaway, and other at-risk youth and families in San Francisco and Marin counties. Our programs are characterized by a commitment to community-based, collaborative services; attention to the urgency, complexity, and ever-changing nature of our clients’ needs; and recognition of the value and dignity of youth and their capacity to become healthy, productive adults.  

The Director of Research and Evaluation oversees the collection, management, and analysis of HYP’s youth services data. This individual must be able to respond thoughtfully and judiciously—drawing on expertise in data systems, social services, and program evaluation—to the needs and goals of a highly adaptive organization with deep roots in the Bay Area. HYP has long-standing programs and services, but prides itself on being responsive to changing needs within the communities it serves. The ideal candidate is able to adapt data systems in kind, recognizing when to leverage existing functionality and when structural changes are needed. Further, this position will support the organization on its continued path toward becoming more data-informed at all levels of operations.

DUTIES: 


  1. Oversee all aspects of client and program data for the agency, including reporting, data integrity, and security 

  2. Administer HYP’s customized Salesforce CRM for tracking client services and outcomes, including managing users, permissions, reports, dashboards, data, and metadata 

  3. Develop, maintain, and analyze evaluations for all programs with input from program management; research appropriate measures for new projects as needed 

  4. Report on program goals, outcomes, and impact to staff, funders, board, and community 

  5. Develop Salesforce metadata and thoroughly document changes to meet existing and future program and funding requirements 

  6. Build program evaluation capacity at program management and executive levels across the organization 

  7. Build and refine existing tools (dashboards and reports) to allow program staff to access and respond to aggregate data in real time to make data-informed program decisions 

  8. Integrate HYP’s data systems with external databases, including government systems, importing and exporting data as needed, to comply with grant requirements 

  9. Migrate archival data to Salesforce 

  10. Review goals and outcomes with program directors and development department annually 

  11. Provide feedback to program directors, including data interpretation support, to inform/implement logic models and theories of change 

  12. Contribute program evaluation information to grant applications 

  13. Hire and supervise consultants, data management, and data entry staff 

  14. Serve as HIPAA Security Officer 
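
As a hedged illustration of duties 2 and 9 above (Salesforce administration and migrating archival data), here is a minimal sketch using the simple_salesforce Python library; the posting prescribes no tooling, and every credential, file, object, and field name below is a hypothetical placeholder.

    # Hedged sketch: bulk-load archival records into Salesforce with the
    # simple_salesforce library (one option; the posting names no tooling).
    # Credentials, file, object, and field names are hypothetical placeholders.
    import csv
    from simple_salesforce import Salesforce

    sf = Salesforce(username="admin@example.org",
                    password="example-password",
                    security_token="example-token")

    # Read archival rows exported from a legacy system.
    with open("archival_clients.csv", newline="") as f:
        records = [{"LastName": row["last_name"], "Description": row["notes"]}
                   for row in csv.DictReader(f)]

    # Insert into a standard object in batches via the Bulk API.
    results = sf.bulk.Contact.insert(records, batch_size=200)
    print(sum(1 for r in results if r["success"]), "records loaded")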

 

QUALIFICATIONS: 


  1. Advanced degree in public health, social work, psychology, human services or a similar field, or equivalent experience where evaluation/research was a prominent component of the program 

  2. 3+ years Salesforce system administrator experience required, including customization and maintenance of custom objects 

  3. Salesforce developer experience required 

  4. 3+ years experience in social services program evaluation and research, including applied research methods 

  5. Familiarity with data requirements for grant writing, reporting, program evaluation, and compliance 

  6. Advanced data analysis skills; high proficiency with data analysis software 

  7. Advanced database development and maintenance skills, including relational data models 

  8. Excellent organizational and communication skills 

  9. Excellent formatting skills and experience creating well-structured, readable reports, presentations, and forms 

  10. Expertise in mental health, social work, youth development and/or public health research literature related to services for at-risk youth 

  11. Creative, team-building, cooperative approach to job performance and constructive, problem-solving orientation to all tasks in a culturally diverse environment 

Huckleberry offers competitive salaries and excellent benefits:  http://www.huckleberryyouth.org/wp-content/uploads/2019/04/2019-Benefits-Summary.pdf 

EQUAL EMPLOYMENT OPPORTUNITY: Huckleberry Youth Programs is an equal opportunity employer, committed to providing equal opportunity to its employees and applicants for employment without discrimination on the basis of race; color; ethnic background; religion; gender; gender identity or expression; sexual orientation; national origin; ancestry; age; marital status; pregnancy, childbirth, or other related medical condition; disability, including HIV-related conditions; or status as a covered veteran. This policy applies to every aspect of employment, including but not limited to: hiring, advancement, transfer, demotion, layoff, termination, compensation, benefits, training and working conditions. 

FAIR CHANCE: Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

 


SCA Environmental, Inc. (SCA) is an established environmental consulting firm based in San Francisco, California. We have been in business over 25 years and offer environmental consulting, engineering and remediation services to private, municipal, and government clients throughout California. 

We are seeking a full-time Associate-level Environmental Geologist or Engineer to work on environmental and remediation projects. The successful candidate will manage and support projects throughout the Bay Area and Northern California, and will work with clients in both the public and private sectors, such as Public Works Departments, Ports, Transportation clients (Airports/Roads/Transit/Marine), Federal Clients, Caltrans, and private developers. The position requires excellent technical and project management skills in all areas of environmental site investigation, remediation, and consulting. Exceptional report-writing and creative problem-solving skills are a necessity for the position.   

Specific responsibilities of the position include but are not limited to the following:    


  • Development and management of field work, including soil, soil vapor, and groundwater sampling; drilling oversight, soil logging, soil boring, monitoring well installation and well development; purging, sampling, and construction oversight

  • Provide construction oversight on environmental remediation projects, operations and maintenance of environmental remediation systems. 

  • Data interpretation and preparation of technical reports, work plans, health and safety plans, closure reports, technical memoranda, etc.

  • Performance of Phase I and II environmental assessments to support due diligence investigations. 

  • Coordination with and management of subcontractors and in-house field staff. 

  • Liaison between regulatory agencies, stakeholders and clients for project assignments. 

  • Supervision of personnel and assistance in staff development/training in technical areas. 

  • Ability to work in a consulting environment and handle multiple project assignments simultaneously while meeting deadlines and budgets. 

  • Understanding all aspects of project management, including scheduling, cost controls, subcontractor selection and coordination, and budget tracking. 

  • Manage client expectations and maintain routine communication with clients and regulatory agencies and stakeholders to support projects 

  • Perform client management and development to assist with the successful pursuit and winning of new environmental business in the Bay Area 

  • Preparation of proposals, and business development

  • Initiating and maintaining extensive contacts with clients in the public and private sectors

Qualifications


  • A Bachelor's degree in Environmental or Civil Engineering, Geology, or related discipline.  

  • Current registration as a Professional Engineer (PE) or Professional Geologist (PG) is required.

  • 7+ years of progressive work experience in the environmental industry with at least 5 years of project and client management in environmental consulting

  • Must have prior soil and groundwater environmental investigation and remediation project experience.

  • Current 40 Hour HAZWOPER is strongly preferred.   

  • Flexibility and ability to work well independently and within a team environment.

  • Ability to effectively interact with clients, regulatory agencies, field operations, technical staff and subcontractors. 

  • Excellent communication skills, both written and verbal.  

  • Exceptional technical report writing skills

  • Proven track record with quality/budget/schedule tracking and scope-specific assignments.

  • Demonstrated ability to maintain clients/accounts for an extended period of time and an exceptional client satisfaction record.

  • Proficient in MS Office Suite.   

BENEFITS AND COMPENSATION:

SCA offers an excellent compensation plan, which includes a number of company-sponsored benefits available to all regular full-time employees, including vacation time, paid holidays, medical insurance, life and long-term disability insurance, and 401(k). Telecommuting (partial) is possible; however, the position requires a local presence in the Bay Area.   

HOW TO APPLY:

To apply, please submit detailed resume, cover letter, and project list to hr@scaehs.com. No attachments please.  

Qualified candidates will be contacted by email, and requested to provide additional information. Based on a review of these responses, shortlisted candidates will be invited for interviews.  


The selected candidate will work with our staff of environmental scientists, civil and mechanical engineers, safety professionals and industrial hygienists and spend approximately 75% of their time performing environmental engineering and/or industrial hygiene based assessments and monitoring involving asbestos and lead. Other types of projects that you will perform include indoor air quality investigations; sampling of air, soil and water; construction monitoring; and evaluation of buildings for hazardous materials.

Required Qualifications


  • 1+ years experience

  • CalOSHA Certified Site Surveillance Technician (CSST) or Certified Asbestos Consultant (CAC)

  • CDPH Lead Inspector/Assessor or CDPH Lead Sampling Technician Certification (preferred, not required)

  • A reliable car, driver's license, and auto insurance for field work are REQUIRED

Additional Requirements

The position will include both field work and office work, and will require work on construction sites near heavy construction equipment, ability to lift up to 30 pounds and climb ladders, and enter crawl spaces and attics. The selected candidate must also be in good physical health and be able to wear respiratory and other personal protective equipment. The ability to work nights and weekends, which may occur up to 20% of the time, is also required.

Work Location:


  • Multiple locations

  • Remote/Work from home

To apply, please submit resume and cover letter to hr@scaehs.com. Include your resume in the body of the email. 

NO ATTACHMENTS PLEASE.

Important: Be sure to reference Job Code IH-1 in the subject line of your email.

SCA is an equal opportunity employer. 

 


Give2Asia is seeking an intern to work with the Programs Department starting January 27, 2020. Interns will work with members of the Programs team on their program development and grantmaking goals. Interns may also have the opportunity to work cross-functionally across departments (e.g., finance, marketing).

Responsibilities include:


  • Helping to prepare new proposals, custom programs, and donor reports.

  • Assisting with materials for donor correspondence and updates.

  • Conducting program research.

  • Blogging about interesting reports and events on the Give2Asia forum.

  • Researching and writing informational and educational materials.

  • Administrative tasks as needed (e.g., helping put together Board Polls, assisting in the due diligence process).

  • Report reviews and re-formatting.

  • Salesforce entry for organizations and grants.

The ideal candidate has, or is working towards, a bachelor’s or master’s degree and interest in International Philanthropy and/or Asian affairs; strong research, writing, and editing skills; computer and internet proficiency; excellent interpersonal skills; and a desire to learn.

Candidates must be available for a minimum of 20 hours per week for an approximately three-month period. Proposed start date is January 27, 2020. This is a volunteer position with a small stipend available for meals and transportation.


We are a Fire Investigation firm looking for a Research Associate. We need a person who has high attention to detail, great problem-solving and computer skills, is a self-starter and quick learner, and can stay calm under fire. If you are a quick-witted person who can handle and prioritize multiple demands when things are busy, but can also be self-motivated when things are slow, we are looking for you!

PRIMARY TASKS:


  • Perform research on project requests such as: manufacturing defects and recalls, building/fire/electrical codes, permits and building plans, historical weather data, geospatial data and aerial photographs.

  • Management of post-fire scene data such as photographs and diagrams.

  • Draw computerized architectural diagrams.

  • Review and edit technical reports.

  • Summarize case files, depositions, reports, etc.

  • Proactively identify project issues. Facilitate resolution and communicate on and/or escalate issues as required to ensure timely resolution.

  • Provide IT and administrative support.

Requirements:


  • Bachelor's Degree, or a combination of education and experience.

  • Science or Engineering background strongly desired.

  • Knowledge of Microsoft Office, Google Apps for Work, Adobe Acrobat. Ability to learn new computer programs.

  • Proven ability to support several projects simultaneously and under tight schedules.

  • Excellent verbal and written communication skills. Technical writing skills are desired.

  • Acute attention to detail with a commitment to excellence and high standards.


SCA Environmental, Inc. is a small environmental consulting firm with two local Bay Area offices. SCA works for many different types of clients, including cities, agencies, high-rise office building owners, banks, the US military, housing developers, non-profit groups, and manufacturing companies.

We currently have the following positions available in our San Francisco office:

Entry Level Environmental Specialist - San Francisco office (Job Code: ESP2-SF)

 

The selected candidate will work with our staff of environmental scientists, civil and mechanical engineers, safety professionals and industrial hygienists. The successful candidate will spend approximately 75% of their time performing environmental engineering and/or industrial hygiene based assessments and monitoring involving asbestos and lead. Other types of projects that you will perform include indoor air quality investigations; sampling of air, soil and water; construction monitoring; evaluation of buildings for hazardous materials; and historical site assessments.

The position will include approximately 75% field work and 25% office work over the course of the year. Note that SCA will train you in the necessary technical areas, so you do not need to have experience in all areas. The most important things you can bring to the job are a desire to learn, an ability to be flexible, and a willingness to work hard.

Qualifications & Experience:

• Bachelor’s degree preferred (the job requires only high-school-level science; all majors are welcome, as are OPT candidates)

• Excellent communication (oral and written) skills

• Excellent organizational skills

• Proficient with MS Office (Word, Excel).

• Must be able to work independently and as part of a team

• Ability to multi-task and work on multiple projects at the same time

• Must be physically able to climb a 20′ ladder, lift up to 50 pounds, enter crawl spaces and attics, and work on construction sites near heavy construction equipment and in outside weather conditions such as wet and/or humid conditions. Work may be conducted in locations where noise, fumes, dust, toxic materials are present.

• Participation in SCA’s Medical Surveillance Program, which requires the selected candidate to maintain a current medical clearance to work and wear respiratory protection

• A reliable car, driver's license, and auto insurance for field work are REQUIRED

• Ability to work nights and weekends, which occurs up to 25% of the time, is also required.

This is an entry-level position. To apply, please submit resume and cover letter to hr@scaehs.com. Include your resume in the body of the email. NO ATTACHMENTS PLEASE.  Be sure to reference the exact Job Code in the subject line of your email. 

No phone calls please.  

SCA is an equal opportunity employer.


Sr. Data Engineer

We are growing!! Bridg is seeking a Senior Data Engineer who will architect a highly scalable data integration and transformation platform that processes high volumes of data under defined SLAs. You will create and build the platform, including ingestion and transformation of data, data governance, machine learning, analytics and consumer insights.

We’re a rapidly-growing start-up in the heart of Sawtelle Japantown, just blocks away from the 405 and the 10. Our loft-style office houses a passionate, hard-working team of ramen-slurpers, beachcombers, and LA-daydreamers. At our core, we’re a tech company, yes, but our people make the magic happen. At Bridg, you will be solving complex problems with a business that celebrates innovation and values your contributions. We want you to wake up each day excited to use cutting-edge tools/technologies and software development practices.

Our platform combines a large-scale, near-real-time data pipeline (tens of billions of point-of-sale transaction data points from major retailers) with over 100 microservices. Our current tech stack includes Flink, Kafka, Cassandra, ElasticSearch, AWS Athena, Glue, Redshift, EMR, DynamoDB, and Java Spring Boot based microservices.

A collaborative and innovative culture, Bridg offers highly advanced technology to identify and track purchasers in a physical retail store. Bridg's technology relies on data science and probabilistic modeling to identify and track purchasers. The Bridg platform gives retail chains the same level of customer insight (and revenue growth) as data-savvy online retailers like Amazon, leveraging hundreds of millions of daily data points.

Qualifications

  • 3+ years working in Big Data and related technologies

  • 5+ years Java and Spring Boot experience

  • Experience building high-performance, scalable distributed systems

  • AWS cloud experience (EC2, S3, Lambda, EMR, RDS, Redshift)

  • Experience in a variety of relevant technologies, including Cassandra, AWS DynamoDB, Kafka, AWS Kinesis, Elasticsearch, Machine Learning, Spark, Hadoop, Hive, Presto

  • Experience in ETL and ELT workflow management

  • Familiarity with AWS data and analytics technologies such as Glue, Athena, Redshift, Spectrum, Data Pipeline

  • MS preferred, or BA/BS degree in computer science, a related field, or equivalent practical experience

  • Experience with Agile methodologies preferred

  • Experience at start-ups a plus

  • Experience in a fast-paced development environment ideal

  • Must pass background check

  • Must be able to work in the office full time

  • Must be able to reliably commute to the office daily

What we offer

  • A fantastic opportunity to be part of a growing start-up

  • A chance to work with a passionate, driven and fun team

  • An incredible work environment: fun, casual and fast-paced

  • Monthly team activities and outings

  • Loft-style office with plenty of break-out space

  • Fully stocked company snack area complete with every drink and snack your heart desires

  • Great benefits: Health, Dental, Vision, and Vacation

Security, Availability and Confidentiality Requirements

  • You are responsible for protecting the credentials provided to you to access S3's (and, where applicable, customer) networks, systems and data

  • You are responsible for maintaining the confidentiality of all S3's customer data to which you are granted access

  • Any suspected compromises of S3's proprietary data or customer data must be reported to Management immediately

  • You will adhere to the S3 Information Security Policy and Procedures and supporting standard operating procedures to protect Company systems and data

  • Respond to and resolve customer help desk requests [varies based on role]

  • You will alert management immediately of any suspected system or data compromises and/or system failures impacting the security, confidentiality, availability and integrity of S3 and customer data

About Bridg: Bridg is a marketing software company that provides a CRM solution, email and SMS marketing, insights and analytics, and mobile app and loyalty program development for restaurants and retailers. Powered by transaction data, Bridg builds unique 360º customer profiles to understand individualized behavior patterns, providing clients with deep data science used to create wide-reaching, effective personalized marketing campaigns that drive traffic and sales in a measurable way. Our headquarters is located in West LA / Santa Monica, and we offer competitive salaries, great benefits, and a high-energy environment with lots of room for personal and professional growth.



Principal Data Engineer - Big Data - Skills Required: Spark, EMR, Big Data, Hadoop, Data Lakes, MongoDB, AWS, Kinesis, Oracle, Flink

If you are a Principal Data Engineer with experience, please read on!

We are looking for a Lead Data Engineer who can both architect and lead efforts around our expanding data pipeline infrastructure. Our product has most recently been centered in the CRM space, but we are looking to expand far beyond that. Currently, we process millions of data points through multiple data pipelines to feed a suite of datastores and applications. We are preparing for 10x+ growth in both the volume of data processed and the speed at which that data can be made available and actionable. To accomplish this, we are looking for someone who can architect and lead the effort to build out highly scalable data solutions.

If you are highly motivated, super passionate about democracy, and want to join a close-knit team that is looking to build great things together, then this position may be for you. This is a full-time role in Raleigh-Durham, NC; Washington, DC; New York City, NY; Sacramento, CA; Austin, TX; Salt Lake City, UT or Chicago, IL.

Top Reasons to Work with Us


  1. Competitive Base Salary


  2. Full Medical/Health Benefits


  3. 401k


  4. Generous Vacation, Paid Holidays & PTO


  5. Fun, Exciting, Positive Culture!


What You Will Be Doing


  • Lead efforts to design, build, scale, and maintain multiple data pipelines


  • Architect highly scalable data solutions


  • Be a technical thought leader within the org


  • Work closely with business owners and external stakeholders to provide actionable data


  • Ensure data accuracy and reliability


What You Need for this Position

Requirements


  • Experience building large scale streaming and batch data pipelines


  • Experience using Big Data technologies (Spark, Flink, EMR, Hadoop, data lakes, etc.)


  • Mastery of multiple databases (e.g. MongoDB, MySQL, etc.)


  • Understanding of data security best practices


Extras


  • AWS data technologies (e.g. Kinesis, Glue, RDS, Athena, Redshift, etc.)


  • Experience building out data warehouse infrastructure


  • DevOps or System Admin experience


  • Data Science exploration and modeling


So, if you are a Principal Data Engineer - Big Data with experience, please apply today!


Applicants must be authorized to work in the U.S.

CyberCoders, Inc is proud to be an Equal Opportunity Employer

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.

Your Right to Work In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.

Principal Data Engineer - Big Data | CA-Sacramento | NV1-1552180



Job Description


Social Media company has a newly created position available. Excellent salary and benefits. Work location is downtown DC, Metro accessible.


The Role:


We’re looking to hire a Data Engineer to help further develop our analytics platform for audience and campaign data. The ideal candidate has a passion for data and big data technology, and will be joining a team of seasoned data engineers to take that passion to the next level!


How You Can Make An Impact:



  • Collaborate with other engineers to produce a robust, highly performant data pipeline including data ingestion, transformation, aggregation, and storage of data for analysis and modeling.

  • Orchestrate complex workflows to accommodate a range of large-scale data sets from our clients and partners across several targeted verticals.

  • Partner with our data scientists to deliver production-ready models that operate at scale to power superior advertising performance.

  • Develop the experience and know-how to keep our data pipeline operations on rails.


 


Requirements:



  • Bachelor’s or Master’s degree in Computer Science (or related field) or equivalent experience

  • Minimum of 2 years experience in software engineering, including at least 1 year working with big data tooling


Ideal Candidate:


  • Experience in:


      • Big data tech including Spark, Hadoop, and HDFS

      • Cloud-based analytics environments (such as Qubole or Databricks)

      • Working within the AWS platform or Google Cloud Platform

      • Python or equivalent high-level language

      • Workflow management tools such as Airflow, AWS Step Functions, or Oozie


    • Domain experience highly desirable in the following areas:

      • Computational advertising and/or marketing technologies

      • Ingesting and transforming large-scale data sets (both in batch and streaming fashion)



    • Excellent leadership, written and verbal communication, analysis, design, development, and collaborative problem-solving skills



Company Description

We combine audience data, insights, creative and measurement to drive superior performance for leading brands. Do you want to be an industry leader? Good. We've been one since there's been an industry.



BIG DATA HADOOP ENGINEER / MACHINE LEARNING ENGINEER-112727

1 year contract

The Machine Learning Engineer shall lead the ML Modeling and Engineering team. The consultant will play the role of technical lead and provide professional services to support the long-term IT strategy and planning, including high-level analysis, professional reports and presentations, and mentoring, support and training.

Deliverables

The tasks for the Machine Learning Engineer include, but are not limited to, the following:

  • Provide technical leadership, develop vision, gather requirements and translate client user requirements into technical architecture

  • Design, build and scale Machine Learning systems across multiple domains

  • Design and implement an integrated Big Data platform and analytics solution

  • Design and implement data collectors to collect and transport data to the Big Data platform

Technical Knowledge and Skills:

  • Strong hands-on experience in building, deploying and productionizing ML models using software such as Spark MLlib, TensorFlow, PyTorch, Python scikit-learn, etc. is mandatory (a hedged sketch follows this section)

  • Ability to evaluate and choose the best-suited ML algorithms, perform feature engineering and optimize Machine Learning models is mandatory

  • Strong fundamentals in algorithms, data structures, statistics, predictive modeling, and distributed systems is a must

  • 4+ years of hands-on development, deployment and production support experience in a Hadoop environment

  • 4-5 years of programming experience in Java, Scala, Python

  • Proficient in SQL and relational database design and methods for data retrieval

  • Knowledge of NoSQL systems like HBase or Cassandra

  • Hands-on experience with Cloudera Distribution 6.x

  • Hands-on experience creating and indexing Solr collections in a Solr Cloud environment

  • Hands-on experience building data pipelines using Hadoop components: Sqoop, Hive, Solr, MR, Spark, Spark SQL

  • Must have experience developing Hive QL and UDFs for analyzing semi-structured/structured datasets

  • Must have experience with the Spring framework

  • Hands-on experience ingesting and processing various file formats like Avro/Parquet/Sequence Files/Text Files, etc.

  • Hands-on experience working in real-time analytics like Spark/Kafka/Storm

  • Experience with graph databases like Neo4j, TigerGraph, OrientDB

  • Must have working experience with data warehousing and Business Intelligence systems

  • Expertise in a Unix/Linux environment writing scripts and scheduling/executing jobs

  • Successful track record of building automation scripts/code using Java, Bash, Python, etc. and experience in the production support issue resolution process

  • Strong experience with data science notebooks like RStudio, Jupyter, Zeppelin, PyCharm, etc.

Preferred Skills: Strong SQL skills; Java, Spring, Scala, Cloudera Hadoop, MLlib, Spark, HBase, Neo4j, Solr, Python, Machine Learning, Data Science Notebooks
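
As a hedged illustration of the Spark MLlib requirement above, a minimal PySpark pipeline sketch: assemble features and fit a model. This is not the client's actual workflow; the data path and column names are hypothetical placeholders.

    # Minimal PySpark MLlib sketch: feature assembly + model fit.
    # Data path and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("ml-sketch").getOrCreate()
    df = spark.read.parquet("s3://example-bucket/training-data/")

    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")

    model = Pipeline(stages=[assembler, lr]).fit(df)
    model.write().overwrite().save("s3://example-bucket/models/lr")  # persist for serving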



Job Responsibilities:


  • Implementation work, including loading from disparate data sets and preprocessing using Hive and Pig.

  • Scope and deliver various Big Data solutions

  • Ability to design solutions independently based on high-level architecture.

  • Manage the technical communication between the team and client

  • Building a cloud-based platform that allows easy development of new applications

 

Qualifications:


  • 3-8 years of demonstrable experience designing technological solutions to complex data problems, developing & testing modular, reusable, efficient and scalable code to implement those solutions.

  • Ideally, this would include work on the following technologies:

  • Expert-level proficiency in Python (preferred)/R. Scala knowledge a strong advantage.

  • Strong understanding and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR & HDFS) and associated technologies (one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.)

  • Hands-on experience with Apache Spark and PySpark

  • Basic data science skills such as clustering, regression and decision trees

  • Ability to work in a team in an agile setting, familiarity with JIRA and clear understanding of how Git works

  • In addition, the ideal candidate would have great problem-solving skills, and the ability & confidence to hack their way out of tight corners.
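
As a hedged illustration of the "basic data science skills" bullet above (clustering, regression, decision trees), a minimal scikit-learn sketch on synthetic data; nothing here comes from the posting itself.

    # Clustering, regression, and a decision tree on synthetic data.
    # Purely illustrative of the skills named above.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y_reg = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
    y_cls = (y_reg > 0).astype(int)

    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    reg = LinearRegression().fit(X, y_reg)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_cls)

    print("cluster sizes:", np.bincount(clusters))
    print("regression R^2:", reg.score(X, y_reg))
    print("tree accuracy:", tree.score(X, y_cls))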


Education:
B.E/B.Tech in Computer Science or related technical degree



Do you view data as an art and a science? So do we.

Avanade leads in providing creative digital services, business solutions and design-led experiences for its clients, delivered through the power of people and the Microsoft ecosystem. Our professionals combine technology, business, and industry expertise to build and deploy solutions to realize results for clients and their customers. Avanade has 34,000 digitally connected people across 24 countries, bringing clients the best thinking through a collaborative culture that honors diversity and reflects the communities in which we operate. We welcome all and seek talented individuals who can bring their whole self to work, build inclusive teams and encourage diversity inside and outside the organization. Majority owned by Accenture, Avanade was founded in 2000 by Accenture LLP and Microsoft Corporation.

How we support you:

We believe in gender equity and an inclusive community. We offer a comprehensive benefits package: generous vacation allowance, disability coverage, retirement plans, paid maternity and paternity leave, life insurance, hotel and travel discounts, extended benefits to cover items that support your well-being, health, dental and vision insurance, and professional development and paid Microsoft certification opportunities.

About you:

You draw on your considerable experience in bringing data and statistics to life to solve sometimes complex problems, and you're comfortable looking after several projects at once. You're able to make your own decisions while at the same time supporting more junior team members.

As a Big Data/PySpark Engineer at Avanade, you will have a deep understanding of the architecture, performance characteristics and limitations of modern storage and computational frameworks, with experience implementing solutions that leverage HDFS/Hive, Spark/MLlib, Kafka, etc. You will have knowledge of Apache Spark and/or Python programming, and deep experience developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations (sketched below). You will also bring knowledge of Python packaging, Azure/Data Lake, and Databricks, plus experience with ML. You will manage large volumes of structured and unstructured data, and extract and clean data to make it amenable for analysis. 80% travel is required.
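
Purely as a hedged illustration of that read-merge-enrich-load pattern (not Avanade's actual pipeline; every path and column name is a hypothetical placeholder):

    # Minimal PySpark sketch of read -> merge -> enrich -> load.
    # Paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    orders = spark.read.parquet("s3://example/raw/orders/")        # external source 1
    customers = spark.read.parquet("s3://example/raw/customers/")  # external source 2

    enriched = (
        orders.join(customers, on="customer_id", how="left")       # merge
              .withColumn("order_date", F.to_date("order_ts"))     # enrich
              .filter(F.col("amount") > 0)                         # basic cleansing
    )

    # Load into the target destination, partitioned for downstream reads.
    enriched.write.mode("overwrite").partitionBy("order_date") \
            .parquet("s3://example/curated/orders_enriched/")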

Key Technologies:

Must-haves:
Apache Spark
Python
PySpark
Hadoop
SQL
Azure or AWS

Pluses:
Cosmos DB
TensorFlow
Apache airflow
Snowflake
Linux OS

Key Role Responsibilities:

Demonstrated expertise working with and maintaining open-source data analysis platforms, including but not limited to:
Pandas, Scikit-Learn, Matplotlib, TensorFlow, Jupyter and other Python data tools
Spark (PySpark), HDFS, Kafka and other high-volume data tools
SQL and NoSQL storage tools, such as MySQL, Postgres, Cassandra, MongoDB and ElasticSearch.
Deep understanding of the architecture, performance characteristics and limitations of modern storage and computational frameworks, with experience implementing solutions that leverage: HDFS/Hive; Spark/MLlib; Kafka, etc.
Technical background in computer science, data science, machine learning, artificial intelligence, statistics or other quantitative and computational science
Hands-on experience using Big Data and statistical analysis tools such as Hadoop/Spark, SQL
Experience transforming data at scale using Spark/PySpark
Working experience with Linux OS (Redhat/Ubuntu)
Excellent communication skills (speaking, presenting)

Preferred Years of Work Experience:
You likely have about 2-6+ years of relevant professional experience.



Job Description


Responsibilities


· As an AWS Big Data Consultant, work with implementation teams from concept to operations, providing deep technical subject matter expertise for successfully deploying large-scale data solutions in the enterprise, using modern data/analytics technologies on premise and in the cloud


· Create detailed target state technical, security, data and operational architecture and design blueprints incorporating modern data technologies and cloud data services demonstrating modernization value proposition


· Build, test and deploy solutions using cloud data services


· Provide business analysis and programming expertise within an assigned business unit/area in the analysis, design, and development of business applications.


· Participate in business and IT project estimation activities. Provide technical leadership for small to medium-scale projects. Utilize business knowledge to collaborate and offer technical solutions.


· Be self-guided and complete tasks with minimum assistance.


 


 


Company Description

Saicon has 20+ Years of IT Staffing and Consulting Experience. Headquartered in Overland, KS, we have 9 Offices Nationwide and have 3 Global Delivery locations. Saicon Specializes and has rich experience filling various type of job roles (Both IT & Non IT) in Retail, Consumer Products and Brands, Insurance, Logistics and Travel, Banking and Financials, Manufacturing, Healthcare and Life Sciences, Telecom, Media & Entertainment, Professional Services, Government and Public Sector.



Immediate NEED!!

BIG DATA ENGINEER

- Gather and process raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.); see the sketch after this list.

- Work closely with engineering, sysops and devops teams to integrate data algorithms and code into productionalized systems.

- Process unstructured data into a form suitable for analysis.

- Perform data analysis.

- Support business decisions with ad hoc analysis.

- Significant experience with Elastic MapReduce. Experience with Hortonworks or Cloudera Hadoop distributions may also be acceptable.

- Familiarity with some or all AWS data services: RDS, DynamoDB, Data Pipeline, QuickSight, ElastiCache, Kinesis, S3
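
A minimal hedged sketch of the first bullet's gather-and-process task: pulling JSON from a hypothetical HTTP API and loading it into a local SQL table. The endpoint, fields, and schema are invented for illustration.

    # Gather raw data from an API and load it into SQL for ad hoc analysis.
    # Endpoint, fields, and table are hypothetical placeholders.
    import sqlite3
    import requests

    resp = requests.get("https://api.example.com/v1/events", timeout=30)
    resp.raise_for_status()
    events = resp.json()  # assume a list of {"id", "kind", "value"} objects

    conn = sqlite3.connect("events.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events (id TEXT PRIMARY KEY, kind TEXT, value REAL)")
    conn.executemany(
        "INSERT OR REPLACE INTO events VALUES (?, ?, ?)",
        [(e["id"], e["kind"], e["value"]) for e in events],
    )
    conn.commit()

    # Ad hoc analysis: counts by kind.
    for kind, n in conn.execute("SELECT kind, COUNT(*) FROM events GROUP BY kind"):
        print(kind, n)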

Education qualification


  • Bachelor's Degree REQUIRED in Computer Science, Computer Engineering, MIS or related field.

Sogeti is a leading provider of professional technology services, specializing in Application Management, Infrastructure Management and High-Tech Engineering. Sogeti offers cutting-edge solutions around Testing, Business Intelligence, Mobility, Cloud and Security, combining world-class methodologies and the global delivery model, Rightshore. Sogeti brings together more than 20,000 professionals in 15 countries and is present in over 100 locations in Europe, the US and India. Sogeti is a wholly-owned subsidiary of Cap Gemini S.A., listed on the Paris Stock Exchange.

At Sogeti USA, we are committed to building a long and enduring relationship with our employees and to creating an environment that rewards and empowers. Our mission is to constantly exceed our employees' expectations in the same way that we strive to exceed our clients' expectations. We offer an environment that celebrates innovation and helps you to achieve a good balance between your professional and personal life. We strive to be an employer of choice.

The benefits our employees enjoy:


  • 401(k) Savings Plan - matched 150% up to 6%. (Our 401(k) is in the top 1% of 401(k) plans offered in the US!)

  • Bonus program that pays up to $24K!

  • $12,000 in Tuition Reimbursement

  • 100% company-paid mobile phone plan

  • Personal Time Off (PTO) - ensuring a balance of work and home life


Sogeti is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.



Our client is seeking a resourceful Software Engineer (Big Data Analytics) to help build the next-generation analytics platform. You'll create our cloud (AWS) based data processing pipeline, data processing infrastructure, and internal APIs. Our platform needs to support multiple types of scalability challenges - from handling terabytes of data to providing analytics in real-time. This is a critical role that will have a significant impact on the product direction, technology and business.

Responsibilities:


  • Build a data processing pipeline that collects, connects, centralizes, and curates data from various internal and external data sources

  • Architect scalable and reliable data engineering solutions for moving data efficiently across systems in near real-time

  • Build a distributed data store that will be the central source of truth

  • Design, implement, test and deploy data processing infrastructure

  • Research and assess the viability of new processing and data storage technologies

  • Ability to take ownership and get things done in an Agile team setting

  • Help break down, estimate, and provide just-in-time design for small increments of work

Skills:

  • Big data technologies - Spark, Kinesis, Kafka, Storm or Hadoop

  • Software experience with Java

  • Source control (preferably Git) and bug tracking systems

  • Ability to reason about performance tradeoffs

  • Preferably have experience with SQL and AWS

  • Scala programming a plus

  • Must be comfortable working in an open, highly collaborative team environment

  • Experience with Big Data technologies

  • Software Development experience, including having worked with Java and exposure to big data technologies


  • Bachelor's in Computer Science / related field or equivalent work experience




Job Title: Sr. Big Data Engineer
Location: Raleigh, NC.



Job Description:
Should have 10 years of development experience, with at least 3 years as a Data Engineer and 1 year of experience in the AWS cloud.


  • Should have strong fundamental knowledge of data pipelines and Big Data Lake management in AWS

  • Should be able to design conceptual data pipeline solutions using Amazon Big Data Lake components and services for both batch and streaming data

  • Should have a strong programming background in Python and PySpark.

  • Must have hands-on experience designing and implementing data pipelines using AWS EMR, Apache Spark and PySpark to transform and move large amounts of data into and out of AWS data stores and databases, such as Amazon S3, and external data repositories such as Snowflake and MongoDB (a hedged sketch follows this list)

  • Knowledge and experience with Snowflake and MongoDB highly preferred

  • Understanding of integration with BI tools is a must-have; Power BI integration with AWS preferred

  • Exposure and understanding of data science and experience with Spark Client preferred
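
A hedged sketch of the pipeline bullet above: a PySpark job of the sort submitted as an EMR step, rolling up raw S3 data into a curated store. Bucket names and columns are hypothetical placeholders, and the Snowflake/MongoDB load is omitted because connector details vary by environment.

    # sketch_emr_job.py: a PySpark job of the kind run as an EMR step.
    # Buckets, columns, and the aggregation are hypothetical placeholders.
    import argparse
    from pyspark.sql import SparkSession, functions as F

    parser = argparse.ArgumentParser()
    parser.add_argument("--input", default="s3://example/raw/tx/")
    parser.add_argument("--output", default="s3://example/curated/tx_daily/")
    args = parser.parse_args()

    spark = SparkSession.builder.appName("tx-daily-rollup").getOrCreate()

    daily = (
        spark.read.json(args.input)                        # batch ingest from S3
             .withColumn("day", F.to_date("event_ts"))
             .groupBy("day", "store_id")
             .agg(F.sum("amount").alias("total_amount"),   # rollup for BI tools
                  F.count("*").alias("tx_count"))
    )

    daily.write.mode("overwrite").partitionBy("day").parquet(args.output)
    spark.stop()

Such a script would typically be launched with spark-submit on the cluster, or attached as an EMR step via the console or CLI.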




Central Business Solutions, Inc,
37600 Central Ct.
Suite #214
Newark, CA 94560.



Position Summary

Big Data Engineers serve as the backbone of the Strategic Analytics organization, ensuring both the reliability and applicability of the team's data products to the entire Samsung organization. They have extensive experience with ETL design, coding, and testing patterns as well as engineering software platforms and large-scale data infrastructures. Big Data Engineers have the capability to architect highly scalable end-to-end pipelines using different open source tools, including building and operationalizing high-performance algorithms.

Big Data Engineers understand how to apply technologies to solve big data problems, with expert knowledge of languages and tools like Java, Python, PHP, Hive, Impala, and Spark in Linux environments. Extensive experience working with both 1) big data platforms and 2) real-time / streaming delivery of data is essential.

Big Data Engineers implement complex big data projects with a focus on collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into actionable deliverables across customer-facing platforms. They have a strong aptitude for deciding on the needed hardware and software design, and can guide the development of such designs through both proofs of concept and complete implementations.

Role and Responsibilities

Responsibilities include:


  • Translate complex functional and technical requirements into detailed design.


  • Design for now and future success


  • Hadoop technical development and implementation.


  • Loading from disparate data sets by leveraging various big data technologies, e.g. Kafka (see the sketch after this list)


  • Pre-processing using Hive, Impala, Spark, and Pig


  • Design and implement data modeling


  • Maintain security and data privacy in an environment secured using Kerberos and LDAP


  • High-speed querying using in-memory technologies such as Spark.


  • Following and contributing to best engineering practices for source control, release management, deployment, etc.


  • Production support, job scheduling/monitoring, ETL data quality, data freshness reporting
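
A hedged sketch of the pre-processing bullets above, using Spark SQL against a Hive metastore; the database, table, and column names are invented for illustration.

    # Pre-processing with Spark SQL over Hive tables, per the bullets above.
    # Database, table, and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive-preprocess")
             .enableHiveSupport()   # talk to the Hive metastore
             .getOrCreate())

    spark.sql("""
        CREATE TABLE IF NOT EXISTS curated.events_clean
        STORED AS PARQUET AS
        SELECT id,
               lower(trim(event_type)) AS event_type,
               CAST(event_ts AS timestamp) AS event_ts
        FROM raw.events
        WHERE id IS NOT NULL
    """)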


Skills Required:


  • 7+ years of Python or Java/J2EE development experience


  • 3+ years of demonstrated technical proficiency with Hadoop and big data projects


  • 5-8 years of demonstrated experience and success in data modeling


  • Fluent in writing shell scripts [bash, korn]


  • Writing high-performance, reliable and maintainable code


  • Ability to write MapReduce jobs


  • Ability to set up, maintain, and implement Kafka topics and processes (illustrated in the sketch below)
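
As a hedged illustration of the Kafka bullet, a minimal produce/consume round trip using the kafka-python client (which the posting does not specify); the broker address and topic name are placeholders.

    # Produce to and consume from a Kafka topic via the kafka-python client.
    # Broker address and topic name are hypothetical placeholders.
    import json
    from kafka import KafkaProducer, KafkaConsumer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("events.raw", {"id": 1, "kind": "click"})
    producer.flush()  # block until the message is delivered

    consumer = KafkaConsumer(
        "events.raw",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,  # stop iterating once idle
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for msg in consumer:
        print(msg.value)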


Skills and Qualifications

Samsung Electronics America, Inc. is committed to employing a diverse workforce, and provides Equal Employment Opportunity for all individuals regardless of race, color, religion, gender, age, national origin, marital status, sexual orientation, gender identity, status as a protected veteran, genetic information, status as a qualified individual with a disability, or any other characteristic protected by law.

  • Please visit Samsung membership at https://account.samsung.com/membership/pp to see Privacy Policy, which defaults according to your location. You can change Country/Language at the bottom of the page. If you are European Economic Resident, please click here at http://careers.eu.samsung.com/PrivacyNoticeforEU.html .

Samsung Electronics is a global leader in technology, opening new possibilities for people everywhere. Through relentless innovation and discovery, we are transforming the worlds of TVs, smartphones, wearable devices, tablets, digital appliances, and network systems, and the entire semiconductor industry with our memory, system LSI, foundry, and LED solutions. Samsung is also leading in the development of the Internet of Things through, among others, our Smart Home and Digital Health initiatives.

Since being established in 1969, Samsung Electronics has grown into one of the world's leading technology companies, and has become recognized as one of the top global brands. Our network now extends across the world, and Samsung takes great pride in the creativity and diversity of its talented people, who drive our growth. To discover more, please visit our official newsroom at ( https://news.samsung.com/global/ ).



Job Description


 


Position Overview


 


We believe the software world is embracing cloud at an exponential speed, and any company that doesn’t have a cloud strategy will not be relevant in 5 years.  Furthermore, any software company addressing data problems that does not have a cloud-native offering TODAY will lose the battle tomorrow. This position is vital to building that strategic intellectual property for Promethium.


 


What we want from you:



  • Ability to architect, prototype and build big data solutions on top of AWS big data products (later on, Azure)


  • Relentless focus on the customer and business needs


  • Great communication skills, ability to work proactively and collaboratively



What you can get from this opportunity:



  • You will have the opportunity to work on the state-of-the-art cloud data processing technology.


  • You will own the cloud big data architecture solution.


  • You will work with a friendly and extremely technical engineering team to build a long-lasting engineering legacy



 


Requirements:



  • 8+ years experience architecting and implementing big data solutions


  • (Must have) 3+ years of hands-on experience with major AWS big data products such as Glue, EMR, Athena, Kinesis (a hedged sketch follows this list)


  • Deep understanding of the architecture of AWS big data products such as Glue, EMR, Athena, Kinesis


  • A solid understanding of security related to big data solutions is a huge plus (particularly IAM)


  • Proficient in one of Python, Java or Scala


  • Effective communication, presentation, and collaboration skills are required
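
A minimal hedged sketch of working with one of the AWS products named above (Athena via boto3); the database, query, and results bucket are hypothetical placeholders.

    # Run an Athena query and poll for its result via boto3.
    # Database, table, and output bucket are hypothetical placeholders.
    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    qid = athena.start_query_execution(
        QueryString="SELECT kind, COUNT(*) AS n FROM events GROUP BY kind",
        QueryExecutionContext={"Database": "example_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])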



Company Description

Promethium is an augmented data management provider and the first company to combine natural language processing with self-service analytics, which allows users to tap their organization’s entire data estate for answers to questions asked in plainspoken language. Promethium’s AI and ML-driven contextual automation software delivers actionable insight within minutes instead of months while ensuring all data used to deliver information is fully governed. Just as Google Maps has simplified our personal lives, we aim to simplify the lives of our customers when it comes to analytics. Just ask your question and leave the rest to Promethium. We will make sure that you find exactly what you want and tell you where to get it. Simple, right? That’s the Promethium magic!



At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.

We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.

Perficient is on a mission to help the Healthcare industry take advantage of modern data and analytics architectures, tools, and patterns to improve the quality and affordability of care. This is an excellent opportunity for the right individual to assist Perficient and its customers to grow the capabilities necessary to improve care through better use of data and information, and in the process take their career to the next level.

Perficient currently has a career opportunity for a Data Developer in their market leading Data Solutions team.

Must be local to the Boston metro and/or willing to relocate to the Boston metro.

Job Overview:

As a Data Developer you will participate in all aspects of the software development lifecycle which includes estimating, technical design, implementation, documentation, testing, deployment and support of application developed for our clients. As a member working in a team environment you will work with solution architects and developers on interpretation/translation of wireframes and creative designs into functional requirements, and subsequently into technical design.


  • Lead the technical planning and requirements-gathering phases, including estimating, developing, testing, managing projects, architecting and delivering.


  • Serve as a technical lead and mentor. Provide technical support or leadership in the development and continual improvement of service.


  • Develop and maintain effective working relationships with team members.


  • Demonstrate the ability to adapt and work with team members of various experience level.


  • SQL Proficiency is a must.


  • Proficiency with Spark with Java required


  • Knowledge of ETL and Stored Procedures in a major EDW platform (Netezza, Oracle, or Teradata preferred) is a must.


  • Experience migrating workloads from traditional data warehouse architectures to Hadoop is preferred.


  • Knowledge of data formats and ETL and ELT processes in a Hadoop environment including Hive, Parquet, MapReduce, YARN, HBase and other NoSQL databases.


  • Experience in dealing with structured, semi-structured and unstructured data in batch and real-time environments.


  • Experience with working in AWS environments including EC2, S3, Lambda, RDS, etc. Familiarity with DevOps and CI/CD as well as Agile tools and processes including Git, Jenkins, Jira and Confluence.


  • Client facing or consulting experience highly preferred.


  • Skilled problem solvers with the desire and proven ability to create innovative solutions.


  • Flexible and adaptable attitude, disciplined to manage multiple responsibilities and adjust to varied environments.


  • Future technology leaders: dynamic individuals energized by fast-paced personal and professional growth.


  • Phenomenal communicators who can explain and present concepts to technical and non-technical audiences alike, including high level decision makers.


  • Bachelor's Degree in MIS, Computer Science, Math, Engineering or comparable major.


  • Solid foundation in Computer Science, with strong competencies in data structures, algorithms and software design.


  • Knowledge and experience in developing software using agile methodologies.


  • Proficient in authoring, editing and presenting technical documents.


  • Ability to communicate effectively via multiple channels (verbal, written, etc.) with technical and non-technical staff.


Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.

More About Perficient

Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.

Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.

Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national, origin, disability status, protected veteran status, or any other characteristic protected by law.

Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.


ID: 2019-8606

External Company Name: Perficient, Inc

External Company URL: www.perficient.com

Street: One Beacon Street, 15th Floor



Why American Express?


There's a difference between having a job and making a difference.

American Express has been making a difference in people's lives for over 160 years, backing them in moments big and small, granting access, tools, and resources to take on their biggest challenges and reap the greatest rewards.

We've also made a difference in the lives of our people, providing a culture of learning and collaboration, and helping them with what they need to succeed and thrive. We have their backs as they grow their skills, conquer new challenges, or even take time to spend with their family or community. And when they're ready to take on a new career path, we're right there with them, giving them the guidance and momentum into the best future they envision.

Because we believe that the best way to back our customers is to back our people.

The powerful backing of American Express.

Don't make a difference without it.

Don't live life without it.

You won't just shape the world of software.

You'll shape the world of life, work and play.

Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every day. Today, innovative ideas, insight and new perspectives are at the core of how we create a more powerful, personal and fulfilling experience for all our customers. So if you're interested in a career creating breakthrough software and making an impact on an audience of millions, look no further.

You won't just keep up, you'll break new ground.


There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:

  • Taking your place as a core member of an agile team driving the latest development practices

  • Writing code and unit tests, working with API specs and automation (a sketch of this kind of work follows this list)

  • Identifying opportunities for adopting new technologies

  • Performing technical aspects of software development for assigned applications, including design, developing prototypes, and coding assignments

  • Functioning as a leader on an agile team by contributing to software builds through consistent development practices (tools, common components, and documentation)

  • Leading code reviews and automated testing

  • Debugging software components and identifying code defects for remediation

  • Leading the deployment, support, and monitoring of software across test, integration, and production environments

  • Empowering teams to automate deployments in test or production environments

  • Empowering teams to automatically scale applications based on demand projections
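
A minimal sketch of the "writing code and unit tests" item above, in PySpark (Python and Spark both appear in this role's stack). The transformation, column names, and threshold are invented for illustration, not taken from American Express's codebase:

    from pyspark.sql import SparkSession, functions as F

    def flag_large_payments(df, threshold=10_000.0):
        """Add a boolean column marking payments at or above the threshold."""
        return df.withColumn("is_large", F.col("amount") >= F.lit(threshold))

    def test_flag_large_payments():
        # Local Spark session so the test runs without a cluster.
        spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
        df = spark.createDataFrame(
            [("a", 5_000.0), ("b", 12_000.0)], ["payment_id", "amount"]
        )
        result = {r["payment_id"]: r["is_large"] for r in flag_large_payments(df).collect()}
        assert result == {"a": False, "b": True}
        spark.stop()

    if __name__ == "__main__":
        test_flag_large_payments()
        print("ok")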

Leadership:

  • Taking accountability for the success of the team in achieving its goals

  • Driving the team's strategy and prioritizing initiatives

  • Influencing team members by challenging the status quo, demonstrating risk taking, and implementing innovative ideas

  • Being a productivity multiplier for your team by analyzing your workflow and contributing to make the team more effective and productive, with faster and stronger results

  • Mentoring and guiding team members to success within the team

Are you up for the challenge?

  • Bachelor's Degree in computer science, computer science engineering, or related experience required; advanced degree preferred

  • Experience with Agile or other rapid application development methods and tools, preferably Agile Scrum and SAFe Agile, plus Agile tools like Rally/JIRA

  • Ability to effectively interpret technical and business objectives and challenges and articulate solutions

  • Willingness to learn new technologies and exploit them to their optimal potential

  • 7 years of wide-breadth engineering experience in application design, software development, automated testing, and production support in a professional environment, and/or comparable experience such as:

      • Demonstrated experience leading teams of engineers

      • Experience with API development

      • Experience with web services, cloud integration, and development

      • Hands-on expertise with application design and software development in Big Data across one or more platforms, languages, and tools (e.g., Java, J2EE, and Big Data components/frameworks such as Hadoop, HBase, MapReduce, HDFS, Pig, Hive, Python, Spark, Spring Boot, Elasticsearch, etc.)

      • Experience with distributed (multi-tiered) systems and relational databases

      • Experience implementing integrated automated release management using tools/technologies/frameworks like Maven, Subversion, and code/security review tools

      • Experience with DevOps, CI/CD, and automated testing tools such as SonarQube, Jenkins, JMeter, etc.

      • Familiarity with machine learning techniques and algorithms such as Regression, Clustering, Random Forest, Time Series Forecasting, etc.

At the core of Software Engineering, every member of our team must be able to demonstrate the following technical, functional, leadership, and business core competencies:

  • Agile Practices

  • Porting/Software Configuration

  • Programming Languages and Frameworks

  • Business Analysis

  • Analytical Thinking

  • Business Product Knowledge

Employment eligibility to work with American Express in the U.S. is required, as the company will not pursue visa sponsorship for these positions.

Job Technology

Title: Senior Big Data Java Engineer

Location: Arizona-Phoenix

Requisition ID: 19018914


See full job description

Description

Keywords: Data Engineering, SQL, Python, PostgreSQL, MySQL, Data Engineer

Robert Half Technology is looking for a Data Engineer to work for their client in Santa Monica, a tech ad agency. The Data Engineer candidate should have at least three years of Python experience and a working knowledge of SQL. The responsibilities for the Data Engineer position include scripting in Python, managing large data sets, and analyzing data in the tech industry. This is a great opportunity to learn new data engineering skills on the job and be part of one of the fastest-growing companies in the LA market.
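
For a flavor of the Python scripting and large-data-set analysis this role describes, here is a generic, hedged sketch: chunked aggregation with pandas so a file larger than memory can still be summarized. The file name and column names are hypothetical.

    import pandas as pd

    # Stream the file in 100k-row chunks so the full data set never sits in memory.
    totals = {}
    for chunk in pd.read_csv("ad_events.csv", chunksize=100_000):
        for campaign, spend in chunk.groupby("campaign_id")["spend"].sum().items():
            totals[campaign] = totals.get(campaign, 0.0) + spend

    # Top ten campaigns by total spend.
    top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]
    print(top)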

If you are interested in this opportunity, please call Vice President - Division Director Jimmy Escobar at (310) 209-6838 or Jimmy.Escobar@RHT.com - https://www.linkedin.com/in/jimmyescobar/

Requirements

  • Python - MUST

  • SQL

  • PostgreSQL

  • MySQL

  • Data Engineer

Technology doesn't change the world. People do.

As a technology staffing firm, we can't think of a more fitting mantra. We're extreme believers in technology and the incredible things it can do. But we know that behind every smart piece of software, every powerful processor, and every brilliant line of code is an even more brilliant person.

Leader among IT staffing agencies

The intersection of technology and people: it's where we live. Backed by more than 65 years of experience, Robert Half Technology is a leader among IT staffing agencies. Whether you're looking to hire experienced technology talent or find the best technology jobs, we are your IT expert to call.

We understand not only the art of matching people, but also the science of technology. We use a proprietary matching tool that helps our staffing professionals connect just the right person to just the right job. And our network of industry connections and strategic partners remains unmatched.

Apply for this job now or contact our branch office at 888-490-4429 to learn more about this position.

All applicants applying for U.S. job openings must be authorized to work in the United States. All applicants applying for Canadian job openings must be authorized to work in Canada.

2019 Robert Half Technology. An Equal Opportunity Employer M/F/Disability/Veterans.

By clicking 'Apply Now' you are agreeing to Robert Half Terms of Use.

Salary: $52.25 - $60.50 / Hourly

Location: Santa Monica, CA

Date Posted: November 30, 2019

Employment Type: Temporary

Job Reference: 00320-0011280753

Staffing Area: Technology


See full job description

Job Description




Summary


The Big Data Engineer is responsible for building scalable data platforms and large-scale processing systems that enable advanced analytics and support data teams across the enterprise. This hands-on role requires some experience working with the AWS cloud platform in addition to expertise in a variety of technologies. The Big Data Engineer will develop and manage the enterprise data warehouse while sourcing data from various databases, applications, and web APIs using stream and batch processing architectures.
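
As a rough illustration of the batch half of that sourcing work, a PySpark job might land one relational table and one web-API feed as Parquet. The connection string, table, API URL, and paths below are all placeholders, not details of this employer's systems, and writing to s3a:// assumes the hadoop-aws package is on the classpath:

    import requests
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("warehouse-ingest").getOrCreate()

    # Batch source 1: a relational table over JDBC (driver jar must be on the classpath).
    orders = (spark.read.format("jdbc")
              .option("url", "jdbc:postgresql://db.example.com/app")   # placeholder
              .option("dbtable", "public.orders")
              .option("user", "etl").option("password", "...")         # placeholder
              .load())
    orders.write.mode("overwrite").parquet("s3a://warehouse/raw/orders/")

    # Batch source 2: a JSON web API, landed through a local DataFrame.
    rows = requests.get("https://api.example.com/v1/events", timeout=30).json()
    events = spark.createDataFrame(rows)  # assumes the API returns a list of flat JSON objects
    events.write.mode("overwrite").parquet("s3a://warehouse/raw/events/")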



Responsibilities



  • Responsible for designing, building & managing the advanced analytics platform to support downstream data science and the business intelligence teams

  • Work with the product owner, data scientist and various internal data users to understand the business requirements and implement optimal data solutions

  • A keen eye toward configuration-driven approaches for automating repeatable processes and tasks

  • Build and operate stable, scalable and highly performant data pipelines that cleanse, structure and integrate disparate big data sets into a readable and accessible format for end user analyses and targeting.

  • Develop data quality and governance framework that supports data lineage and ensures delivery of high-quality data to internal and external stakeholders

  • Use analytical and problem-solving skills to take complex business requests and transform them into clean, simple data solutions.

  • Implement and improve version control, deployment strategies, and notifications to ensure product quality, agility, and recoverability.

  • Understand and work with technology & IT teams to support database procedures, such as upgrades, backups, recovery, migrations, etc.



Role Specific Skills



  • Designing, building, and operationalizing large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Spark, EMR, S3, EC2, DynamoDB, Redshift, Kinesis, Lambda, Glue, Snowflake, etc.

  • Hands-on experience analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on the AWS cloud using AWS and third-party services.

  • Knowledge of other NoSQL databases, such as MongoDB, Cassandra, HBase, etc.

  • Building and migrating complex ETL pipelines on Redshift and Elastic MapReduce so the system can scale elastically

  • Hands-on knowledge of advanced SQL (analytical functions) and experience writing and optimizing highly efficient SQL queries (see the example after this list)

  • Proficiency with Python scripting

  • Experienced in testing and monitoring data for anomalies and rectifying them.

  • Advanced communication skills to be able to work with business owners to develop and define key business uses and to build data sets that address them.

  • Experience working with data visualization tools such as Tableau
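
To make the "analytical functions" item above concrete, here is an illustrative window-function query. The table and column names are invented, and Spark SQL stands in for Redshift here, which accepts the same OVER (...) syntax:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("window-demo").getOrCreate()
    spark.createDataFrame(
        [(1, "2019-11-01", 120.0), (1, "2019-11-15", 80.0), (2, "2019-11-10", 200.0)],
        ["customer_id", "order_date", "amount"],
    ).createOrReplaceTempView("orders")

    # Most recent order per customer via ROW_NUMBER() over a per-customer window.
    latest = spark.sql("""
        SELECT customer_id, order_date, amount
        FROM (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY customer_id
                                      ORDER BY order_date DESC) AS rn
            FROM orders
        ) AS t
        WHERE rn = 1
    """)
    latest.show()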



Professional Skills


These are the professional skills we would expect from an individual fully established in this role.



  • Customer Service - Advanced

  • Verbal Communication - Advanced

  • Written Communication - Advanced

  • Teamwork - Advanced

  • Relationships - Proficient

  • Learning Agility - Expert

  • Analysis - Expert

  • Problem Solving - Expert

  • Process Orientation - Expert

  • Prioritization - Proficient



Qualifications



  • Bachelor's degree or equivalent in an engineering or technical field such as Computer Science, Information Systems, Statistics, Engineering, or similar.

  • 5-10 years of quantitative and qualitative experience building ETL data flows in a Big Data ecosystem.



___________________________________________________________________________________


Please note, this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities, and schedule may change at any time with or without notice.



SMS Assist is an Equal Opportunity Employer (EOE) that welcomes and encourages all applicants to apply regardless of age, race, color, religion, sex, sexual orientation, gender identity and/or expression, national origin, disability, veteran status, marital or parental status, ancestry, citizenship status, pregnancy or other reasons prohibited by law.



#ZP


#Indeed




Company Description

SMS Assist is a total outsourced facilities maintenance solution that delivers services to national and Fortune 500 customers through its cloud-based technology platform. The SMS Assist network of over 28,000 affiliates is integrated through technology, enabling it to deliver lower cost, higher quality service at over 120,000 locations.

Watch this video to learn more about SMS Assist's Smart Maintenance Solutions!
http://youtu.be/aViAHTEQiRU


See full job description

Job Description


Essential Duties & Responsibilities:


· Be able to effectively communicate with both technical and business stakeholders for requirements analysis, and be highly proficient in the technical architecture, design, and development of databases.


· Provide Project, ITE, ODTR and Production support including analyzing incidents and identifying the root cause.


· Migrating data and related functionality from legacy systems to modernized solutions.


· Developing and managing data processes to ensure that data is available and usable


· Creating data platforms, integration architectures, and pipelines


· Managing and monitoring data via automated testing frameworks (Data-Driven Testing, TDD, etc.); a sketch follows this list


· Ensuring that data is consistently available and of sufficient quality to be considered fit for use


· Design and perform all activities related to big data architecture components between environments during development and deployment.


· Work with Business Analysts and leads to build a functional understanding of development assignments, both for themselves and for the developers they supervise.


· Working closely with data architects, data scientists, and data visualization developers to design, build, test, deliver, and maintain sustainable and highly scalable data solutions


· Supervise the work of Big Data Developers, including assigning work, providing technical direction, and performing code reviews
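
A minimal sketch of the data-driven testing mentioned earlier in this list: parametrized pytest checks over a pipeline's output records. The record fields and quality rules are invented for illustration.

    import pytest

    def load_output_records():
        # Stand-in for reading the pipeline's actual output.
        return [
            {"id": 1, "email": "a@example.com", "age": 34},
            {"id": 2, "email": "b@example.com", "age": 41},
        ]

    # One test case is generated per record, so failures pinpoint the bad row.
    @pytest.mark.parametrize("record", load_output_records())
    def test_record_quality(record):
        assert record["id"] is not None
        assert "@" in record["email"]
        assert 0 < record["age"] < 130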


Knowledge, Skills, and Abilities:


· Experience with Big Data technologies including Hadoop, Hive, and Spark.


· Exhibit exceptional technical skills in database architecture, database design, and ETL.


· Displays knowledge of the proper way to adhere to the Software Development Life Cycle (SDLC).


· Demonstrate expertise in front-end and back-end tools.


· Experience in SQL-based technologies


· Excellent analytical and problem-solving skills to quickly recognize, isolate, and resolve technical problems.


· Experience architecting big data solutions


· Hands-on experience with implementation and support of a business intelligence reporting suite.


· Understand business requirements and able to create/propose solutions.


· Ability to work independently, prioritize tasks appropriately, and adapt quickly to project changes.


Education, Experience, & Certifications:


· Minimum of 5-7+ years of experience.


· Bachelor’s degree in Engineering, Mathematics, Computer Science, Information Systems, Economics or Business, or equivalent.


Company Description

We are a young and energetic staffing service provider, with the aim of providing quality staff for all your work needs. We believe in the hidden potential of people and want to bring it out to its fullest capacity.

Website:
http://altakgroup.com/


See full job description

Job Description


Big Data Engineer


Our direct client, a fast-growing software and data analytics firm in Greenwich, is seeking to bring on a mid-to-senior-level Big Data Engineer. You will be part of the team that designs and implements solutions integrated into the client's analytics Hadoop/EMR, Spark, and Elasticsearch systems. The ideal candidate is an experienced application developer with a concentration in data pipeline building, automation, data warehousing, and data modeling. The successful candidate will be responsible for expanding and optimizing data, analytics, and data pipeline architecture, as well as optimizing data flow and building out a semantic layer to support cross-functional teams. The Data Engineer will support software developers, business analysts, and data scientists on data and computational initiatives and will ensure optimal data architectures and patterns are consistent throughout ongoing projects.


Responsibilities:



  • Collaborate with the infrastructure team to ensure optimal extraction, transformation, and loading of data in the current and future systems using technologies like Redshift, Hive, S3, PySpark, Elasticsearch, R, and EMR (see the sketch after this list).

  • Generate the client-specific, multi-tenant, large and complex data layers that meet and exceed the demanding functional and non-functional needs of our SaaS-based web application.

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing solutions for greater scalability.

  • Develop analytic dashboards, utilizing tools like Tableau and RStudio, that draw on our data pipeline metrics to provide actionable insights, operational efficiencies, and other key technical and business performance-enhancing opportunities.

  • Experience supporting and working with cross-functional teams in a dynamic environment.

  • Strong organizational, oral and written skills.
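
As promised above, a hedged sketch of the extract-transform-load work in the first bullet: read raw JSON from S3 with PySpark, clean it, and write partitioned Parquet that a warehouse such as Redshift (via Spectrum or COPY) or Hive could consume. Bucket names and columns are placeholders.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl").getOrCreate()

    raw = spark.read.json("s3a://client-raw/events/2019/11/")      # placeholder path
    clean = (raw.dropDuplicates(["event_id"])                      # de-duplicate
                .filter(F.col("event_ts").isNotNull())             # drop bad rows
                .withColumn("event_date", F.to_date("event_ts")))  # partition key

    (clean.write.mode("append")
          .partitionBy("event_date")
          .parquet("s3a://client-curated/events/"))                # placeholder path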


Qualifications:



  • Candidate with 8+ years of experience in a Data Engineer role, who has attained at minimum a Bachelor’s degree in Computer Science.  

  • Strong analytic and modeling skills in working with both structured and unstructured data

  • Successful candidate will have experience and proficiency in the following software / tools:

      • Big data tools: Hadoop, Spark 2 (pyspark), Kafka, Elasticsearch, Hive.

      • Expertise in Redshift or equivalent columnar databases.

      • Data pipeline and workflow management tools, e.g., Hortonworks HDF, Lambda, Oozie, Airflow

      • Working knowledge of message queuing and stream processing (Amazon MQ, SQS); a sketch follows this list

      • Experience with Java and/or Scala a plus.

      • Experience with RStudio including formatted tables, plots and LaTeX a plus
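
The sketch promised in the message-queuing item above: a long-polling SQS consumer using boto3. The queue URL, region, and processing step are assumptions for illustration, not details of this client's systems.

    import json
    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events"  # placeholder

    while True:
        # Long-poll up to 20 seconds to avoid hammering the API with empty reads.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            payload = json.loads(msg["Body"])
            print("processing", payload)        # stand-in for real work
            # Delete only after successful processing, so failures are redelivered.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])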




#ZR

 


See full job description

Job Description


Big Data Engineer


Location: Herndon, VA


Why CMCI?


CMCI provides superior management consulting and IT services that empower enterprises to achieve their business goals in today's highly competitive market. Our goal is to seamlessly integrate into each customer's organization in order to fully understand their business and technology needs. This approach allows us to quickly deliver superior-quality solutions while achieving the highest level of customer satisfaction on time and within budget. By choosing CMCI, you are choosing a company that can deliver on business outcomes and mission needs in the most cost-effective manner and without sacrificing capability. As part of CMCI's culture of loyalty and commitment to its employees, CMCI is committed to providing a tremendous career path by promoting employees to their highest potential.


 


Job Description



Responsibilities include:


·        Perform development and maintenance of end-user focused, object-oriented, data-driven analytic applications to support CBP threat analysis and targeting.


·        Independently identify technical solutions for business problems, directly contribute to conceptual design, and routinely collaborate with Enterprise/Application architects, Database Architects, Data Scientists, and mission stakeholders.


·        Develop new code, modify existing application code, conduct unit and system testing, and engage in rigorous documentation of developed and delivered application use cases, data flows, and functional operations.


·        Integrate with, and materially contribute to, project portfolio teams as a matrixed resource to provide development and issue resolution expertise in collaboration with data scientists, intelligence analysts, developers, and other participants at the direction of a project manager. 


Required Skills


·        Demonstrated expertise in Java and Object-Oriented Design and development principles.


·        Experience with Java 8+ and the newer language features.


·        Experience with the Hadoop ecosystem, including HDFS, YARN, Hive, Pig, and batch-oriented and streaming distributed processing methods such as Spark, Kafka, or Storm (see the streaming sketch after this list)


·        Experience with distributed data/computing tools such as Hadoop, Spark, Impala, etc.


·        Experience with one or more relational database systems such as Oracle, MySQL, Postgres, etc.


·        Experience with SQL.


·        Experience with one or more build tools such as Maven, Gradle, etc.


·        Software Configuration Management (SCM) using Git.


·        Comfortable with Linux.


·        High level of self-motivation, desire to deliver stellar solutions and willingness to work in a distributed team environment.
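
The streaming sketch referenced in the Hadoop-ecosystem item above: consuming a Kafka topic with Spark Structured Streaming and appending the parsed records to HDFS. Broker addresses, topic, and paths are placeholders, and the job needs the spark-sql-kafka package on its classpath.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

    stream = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder
              .option("subscribe", "transactions")                # placeholder topic
              .load())

    # Kafka delivers key/value as binary; cast to strings before landing.
    parsed = stream.select(F.col("key").cast("string"),
                           F.col("value").cast("string"),
                           "timestamp")

    query = (parsed.writeStream.format("parquet")
             .option("path", "hdfs:///data/transactions/")        # placeholder
             .option("checkpointLocation", "hdfs:///chk/transactions/")
             .start())
    query.awaitTermination()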


Desired Knowledge and Experience


·        Experience delivering solutions using Amazon Web Services (AWS EC2, RDS, S3)


·        Experience with distributed search engines like Elasticsearch or Solr


·        NoSQL database systems such as Cassandra, MongoDB, DynamoDB, etc.


·        Familiarity with Atlassian tools such as Jira, BitBucket


·        Continuous integration with Jenkins or Bamboo.


Clearance Requirement: We are looking for candidates with either CBP clearance or DoD Top Secret clearance.


Education Qualification:


·        Master’s degree in computer science or related field


All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.


 




See full job description

Job Description


Big Data Engineer
Global Leader in eCommerce
Boston, MA


THE COMPANY


A globally renowned eCommerce business is looking for a Big Data Engineer to join their team in the Greater Boston Area. If you are ambitious, want to join a centralized, data-driven organization, and have a solid background in leading big-data-driven engineering teams, then we want to speak to you. This is the perfect role for someone who enjoys a fast-paced environment using the newest technologies available.


THE ROLE - Big Data Engineer



  • Architect, design and implement big data platforms and solutions

  • Meeting with stakeholders and analytical team leaders to define and develop the company's infrastructure and roadmap to provide functional support to all areas of the business

  • Implement data-driven solutions that are both robust and scalable

  • Build systems that support data analytics through ETL transformations and data modeling

  • Develop easy to consume data sets while becoming a subject matter expert for your business unit


YOUR SKILLS AND EXPERIENCE



  • Experience implementing end to end data solutions

  • Custom BI tool experience using tools like Tableau, Qlik, PowerBI or Looker

  • Python and SQL Server programming experience

  • Big Data Ecosystem experience leveraging technologies like Hadoop, Hive, Spark, Sqoop and Kafka

  • Cloud Computing Experience - GCP and BigQuery experience highly preferred
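
As a small, hypothetical illustration of the GCP/BigQuery item above, using the google-cloud-bigquery client (the project, dataset, and table names are invented; credentials come from the environment via GOOGLE_APPLICATION_CREDENTIALS):

    from google.cloud import bigquery

    client = bigquery.Client(project="my-ecommerce-project")   # placeholder project
    sql = """
        SELECT product_id, SUM(quantity) AS units
        FROM `my-ecommerce-project.sales.order_items`
        GROUP BY product_id
        ORDER BY units DESC
        LIMIT 10
    """
    # Runs the query server-side and iterates over the result rows.
    for row in client.query(sql).result():
        print(row.product_id, row.units)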


THE BENEFITS


You will receive a total compensation of up to $160k, industry-leading benefits, a strong bonus package, the opportunity to grow into a management role and much more.


HOW TO APPLY


Please register your interest by sending your CV to Tommy Daughtrey via the Apply link on this page.


Company Description

Data and Analytics recruitment is our core business and we’re proud to say, our customers believe we’re good at it. In our most recent customer satisfaction survey, 95% of respondents said that they would recommend Harnham.

Harnham has actively chosen to focus on Data and Analytics; we've immersed ourselves in this market and are now an integral part of this business community.

Our capability has grown to provide recruitment services and advice across the Marketing Analytics, Credit Risk, Data Science, Data and Technology and Digital sectors.


See full job description

Job Description


Our direct client is looking for a Big Data Engineer. The position is based out of Rockville, MD. Multiple Location(s): Washington, D.C. Metro Area and New York City


Job Title: Big Data Engineer


Location: Rockville, MD, with a possible additional location of Reston, VA. Multiple location(s): Washington, D.C. metro area and New York City.


Experience levels: Accepting Junior 4-6 yrs, Mid 8-10 yrs, Senior 10+ years


Duration: Long term. After 6 months, the client will hire candidates who are open to full-time permanent employment.


Work Authorization: US Citizen, GC, EAD, OPT, H1B, etc


Number of openings: 40


Start Date: Immediate


Interview Process:


Local candidates: phone and on-site.


Non-local candidates: phone and video.


Project Details:


In the Market Regulation Surveillance Patterns technology team, an expert in this role is vested with responsibilities arising from FINRA's mission to protect the integrity of US securities capital markets. FINRA has a portfolio of 'surveillance patterns' that look for manipulative and non-compliant behavior in the database of all transactions that occur in the stock market. The database itself consists of tens of billions of records per day. Engineers work with these large volumes of data using state-of-the-art, industry-standard technologies, all of which are operated wholly in a cloud-computing environment.
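
A deliberately simplified illustration of what such a surveillance pattern might look like (this is not one of FINRA's actual patterns; the schema, path, and crude wash-trade heuristic are all invented for the sketch): flag accounts whose buys and sells in the same symbol on the same day exactly offset each other.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("pattern-demo").getOrCreate()
    trades = spark.read.parquet("s3a://market-data/trades/")      # placeholder path

    flagged = (trades
        .groupBy("account_id", "symbol", "trade_date")
        .agg(F.sum(F.when(F.col("side") == "BUY", F.col("qty")).otherwise(0)).alias("bought"),
             F.sum(F.when(F.col("side") == "SELL", F.col("qty")).otherwise(0)).alias("sold"))
        # Crude heuristic: nonzero buys that are exactly offset by sells.
        .filter((F.col("bought") > 0) & (F.col("bought") == F.col("sold"))))

    flagged.show()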


In order to operate with ever-improving effectiveness, the team maintains a rich culture of performance and innovation. Collaboration within the team and with business stakeholders is frictionless, and engineers are given significant independence to experiment with better ways of applying technology for business value. Technological and career growth opportunities are a natural, everyday part of the working environment.


Please read further to get a better feel for the role and the skills that lead to success in this team.


Job Functions:


·       Analyze system requirements and design responsive algorithms and solutions


·       Use big data and cloud technologies to produce production quality code


·       Engage in performance tuning and scalability engineering


·       Work with team, peers and management to identify objectives and set priorities


·       Perform related SDLC engineering activities like sprint planning and estimation


·       Work effectively in small agile teams


·       Provide creative solutions to problems


·       Identify opportunities for improvement and execute


Essential skills:


·       Experience with cloud based Big Data technologies


·       Proficiency in Hive / Spark SQL


·       Experience with Spark


·       Experience with one or more programming languages like Scala, Python, and/or Java


·       Ability to push the frontier of technology and independently pursue better alternatives


Please apply with your updated resume along with the following details: full name, work authorization, current location, hourly rate, and contact details, and we will contact you to provide more information.


Thank you


CalSoft Corp


4695 Chabot Dr, Suite#200,


Pleasanton, CA 94588


 


 


 



See full job description

Are you a talented Data Engineer? Do you have a proven background in developing Kafka pipelines and working at the petabyte scale? Do you want to join one of the world's fastest-growing tech labs?

One of the world's blue-chips is looking to scale their AI offering with a focus on NLP and speech technology. They are now looking to recruit three experienced Data Pipeline Engineers. You'll be responsible for building and managing data pipelines and stream processing infrastructure.

Other responsibilities will include:

  • Building and maintaining the core petabyte-scale data storage system as a hybrid of Hibernate and Amazon S3.

  • Building and maintaining Kafka pipelines (a sketch follows below).

  • Building pipelines to support new artificial intelligence models.

You'll have:

  • More than 3 years' commercial experience with skills in Python/Java and JSON

  • Strong cluster computing experience

  • Applied experience with Amazon S3

  • Prior experience of building infrastructure at the petabyte scale

Big Cloud is acting as an employment agency in relation to this vacancy.
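
The Kafka-pipeline sketch promised above, using the kafka-python client to batch consumed records into Amazon S3. The topic, broker, bucket, and batch size are all assumptions for illustration, not details of this employer's systems.

    import json
    import boto3
    from kafka import KafkaConsumer

    consumer = KafkaConsumer("speech-events",                     # placeholder topic
                             bootstrap_servers=["broker1:9092"],  # placeholder broker
                             value_deserializer=lambda b: json.loads(b.decode("utf-8")))
    s3 = boto3.client("s3")

    batch, batch_no = [], 0
    for message in consumer:
        batch.append(message.value)
        if len(batch) >= 1000:                  # flush every 1,000 records
            s3.put_object(Bucket="ai-datalake",  # placeholder bucket
                          Key=f"events/batch-{batch_no:08d}.json",
                          Body=json.dumps(batch).encode("utf-8"))
            batch, batch_no = [], batch_no + 1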


See full job description

Job Description


 Big Data Engineers -


We have an urgent need to bring in 2 big data engineers for our area.  Skills required are Teradata, Hadoop, MapReduce, Hive, Pig, Data streaming, NoSQL, SQL, and programming.  



See full job description

Overview

We are seeking a Big Data Engineer to support our customer.

 

 

Responsibilities

Designs, modifies, develops, writes and implements software systems. Participates in software and systems testing, validation, and maintenance processes through test witnessing, certification of software, and other activities as directed. Provides support to senior staff on projects/programs. Familiar with standard concepts, practices, and procedures within a variety of fields related to the project. This position takes direction from senior technical leadership.

The Big Data Engineer (BDE) is responsible for building the next generation of web applications and systems, focusing on capability delivery to end users. The BDE is a member of a big data team of specialists within the multi-disciplinary agile development team. The BDE will manage the full lifecycle of requirements collection, software design, development, and delivery in support of analysts. The BDE helps manage effective processes associated with the architecture. The BDE collaborates closely with the Agile Software Developers (ASDs), Technical Targeting Developers (TTDs), and the end-user analysts to write and implement cutting-edge big data algorithms and analytics. The BDE engages in software solution planning and creation to ensure capabilities are delivered using the latest available technologies and methods. The BDE will operate in a RAD/JAD environment in which tasks are rapidly defined and then executed to ensure maximum user input, feedback, and adoption. The BDE ensures the interoperability of the in-house capability with outside partners.

Qualifications

Minimum Qualifications:

    • 5 years of experience
    • CompTIA Security+ certification or CISSP certification
    • Proficiency in two or more of the following programming languages: C#, Java, .NET, Python, Perl, Ruby, or similar
    • Familiarity with current Agile methods

  • Proficiency with the following:
    • Multiple operating systems including: UNIX, Linux, Windows, Cisco IOS, etc.
    • Machine learning, data mining, and knowledge discovery
    • Analytic algorithm design and implementation
    • ETL processes, including document parsing techniques (see the sketch after this list)
    • Networking, computer, and storage technologies
    • Using or designing RESTful APIs, SOAP, XML
    • Developing large cloud software projects, preferably in Java, Python or C++ language
    • Java/J2EE, multithreaded and concurrency systems
    • Multi-threaded, big data, distributive cloud architectures and frameworks including Hadoop, MapReduce, Cloudera, Hive, Spark, Elasticsearch, etc. for the purposes of conducting analytic algorithm design and implementation
    • NoSQL databases such as Neo4j, Titan, MongoDB, Cassandra, and HBase
    • AWS Services (EC2, Network, ELB, S3/EBS, etc.)
    • Processing and managing large data sets (multi PB scale)
    • Web services environment and technologies such as XML, KML, SOAP, and JSON
    • Proficiency in troubleshooting very complex distributed environments, including following stack traces back to code and identifying a root cause
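
The document-parsing ETL sketch referenced above: extract records from a JSON document, transform them into flat rows, and load them into a database (SQLite here, for self-containment). The document shape is invented for the example.

    import json
    import sqlite3

    # Extract: parse the source document (stand-in for a real feed or file).
    doc = json.loads('{"reports": [{"id": "r1", "tags": ["alpha", "beta"]}]}')

    # Transform: flatten nested tags into one (report_id, tag) row each.
    rows = [(r["id"], tag) for r in doc["reports"] for tag in r["tags"]]

    # Load: write the rows into a relational table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE report_tags (report_id TEXT, tag TEXT)")
    conn.executemany("INSERT INTO report_tags VALUES (?, ?)", rows)
    print(conn.execute("SELECT * FROM report_tags").fetchall())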


Preferred Qualifications:

  • Education: Master's Degree in Computer Science or a related field (e.g., Statistics, Mathematics, Engineering), but a technical BS degree will suffice
  • Distributed computing-based certifications
  • Proficiency with the following:
    • Management/tracking utilities such as Jira, Redmine, or similar
    • Running Internet facing or Service Level Agreement (SLA'd) auto-deployed environments
    • Real-time media protocols (Real-time Transport Protocol (RTP), Secure Real-time Transport Protocol (SRTP))
    • Data transfer systems such as NiFi
    • Text processing: NLP, NER, entity retrieval (e.g., Solr/Lucene), topic extraction, summarization, clustering, etc.
    • Certification from an Agile certified institute, International Consortium for Agile, Scaled Agile Academy, Scrum Alliance, Scrum.org, International Scrum Institute, ScrumStudy, Project Management Institute - Agile Certified Practitioner, or similar XP/Scrum certification or training is desired
    • Support to SOF; Previous experience with technology, intelligence and cyber under the umbrella of USSOCOM 


Education:

Bachelor of Arts or Bachelor of Science in Computer Science or related fields (e.g. Statistics, Mathematics, Engineering), or equivalent in years of experience, or demonstrates adequate knowledge for the position.

 

Clearance Requirements:        

Must have active TS/SCI clearance

 Physical Demands - The physical demands described here are representative of those that may need to be met by an employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

 

While performing the duties of this Job, the employee is regularly required to sit and talk or hear. The employee is frequently required to walk; use hands to finger, handle, or feel and reach with hands and arms. The employee is occasionally required to stand; climb or balance and stoop, kneel, crouch, or crawl. The employee must occasionally lift and/or move up to 20 pounds.

 

HII-MDIS, formerly Fulcrum, is an equal opportunity employer and gives consideration for employment to qualified applicants without regard to race, color, religion, sex, national origin, disability or protected veteran status. EOE Minorities/Females/Veterans/Disability

 

CJ  *MON*


See full job description

IQuest Solutions Corp was founded in 2004. At our company, you will be part of a leading information-based technology company. Our success is grounded in our team of talented and creative professionals. When you become a member of our global team of professionals, you'll have the opportunity to fast-track your career by leveraging the experience, vision, and worldwide scope of a rapidly growing technology services company. To create a dynamic enterprise, we hire the best and smartest individuals and strive to foster an environment of focus, excellence, convergence, and growth. We currently have a full-time position, and sponsorship is available for the qualified candidate.


Big Data Software Engineer (Spark/Scala/Java/AWS)


As a Big Data Software Engineer at IQuest Solutions Corp, you’ll be part of a team that’s leading the next wave of disruption at a whole new scale, using the latest distributed computing technologies and operating across large sets of data spanning across AWS usage to other cloud related data.



Day To Day Responsibilities


·        Using Big Data tools (Hadoop, Spark, AWS) to conduct the analysis of petabytes of AWS usage data and other data

·        Writing software to clean and investigate large, messy data sets of numerical data

·        Integrating with external data sources and APIs to discover interesting trends

·        Designing rich data visualizations to communicate complex ideas to customers or company leaders using Tableau or other tools

·        Working directly with Product Owners and end users to develop solutions in a highly collaborative and agile environment

·        The ability to own the application end to end and act as an owner


Basic Qualifications


·        Bachelor's Degree or military experience

·        At least 2 years of experience with Java

·        At least 1 year of experience with Scala

·        At Least 2 years of experience with Python

·        At least 2 years of experience with Spark

·        At least 1 year of experience with AWS

·        At least 2 years of experience with SQL



Preferred Qualifications


·        Master’s Degree or 3 years of relevant experience

·        1+ year of experience with Spark or Hadoop

·        1+ year of experience with Kafka, Tableau or Databricks

 

Job Category – Software Engineering, Technology Explorers


See full job description