Huckleberry Youth Programs (HYP) is a community-based youth agency, providing a full spectrum of services to homeless, runaway, and other at-risk youth and families in San Francisco and Marin counties. Our programs are characterized by a commitment to community-based, collaborative services; attention to the urgency, complexity, and ever-changing nature of our clients’ needs; and recognition of the value and dignity of youth and their capacity to become healthy, productive adults.
The Director of Research and Evaluation oversees the collection, management, and analysis of HYP’s youth services data. This individual must be able to respond thoughtfully and judiciously—drawing on expertise in data systems, social services, and program evaluation—to the needs and goals of a highly adaptive organization with deep roots in the Bay Area. HYP has long-standing programs and services, but prides itself on being responsive to changing needs within the communities it serves. The ideal candidate is able to adapt data systems in kind, recognizing when to leverage existing functionality and when structural changes are needed. Further, this position will support the organization on its continued path toward being more data informed through all levels of operations.
Huckleberry offers competitive salaries and excellent benefits: http://www.huckleberryyouth.org/wp-content/uploads/2019/04/2019-Benefits-Summary.pdf
EQUAL EMPLOYMENT OPPORTUNITY: Huckleberry Youth Programs is an equal opportunity employer, committed to providing equal opportunity to its employees and applicants for employment without discrimination on the basis of race; color; ethnic background; religion; gender; gender identity or expression; sexual orientation; national origin; ancestry; age; marital status; pregnancy, childbirth, or other related medical condition; disability, including HIV-related conditions; or status as a covered veteran. This policy applies to every aspect of employment, including but not limited to: hiring, advancement, transfer, demotion, layoff, termination, compensation, benefits, training and working conditions.
FAIR CHANCE: Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
SCA Environmental, Inc. (SCA) is an established environmental consulting firm based in San Francisco, California. We have been in business over 25 years and offer environmental consulting, engineering and remediation services to private, municipal, and government clients throughout California.
We are seeking a full-time Associate Level Environmental Geologist or Engineer to work on environmental and remediation projects. The successful candidate will manage and support projects throughout the Bay Area and Northern California, and will work with clients in both the public and private sectors, such as Public Works Departments, Ports, Transportation clients (Airports/Roads/Transit/Marine), Federal Clients, Caltrans, and private developers. The position requires excellent technical and project management skills in all areas of environmental site investigation, remediation, and consulting. Exceptional report-writing and creative problem-solving skills are a necessity for the position.
Specific responsibilities of the position include but are not limited to the following:
BENEFITS AND COMPENSATION:
SCA offers an excellent compensation plan, which includes a number of company sponsored benefits available to all regular full-time employees, including: vacation time, paid holidays, medical insurance, life and long-term disability insurance; and 401k. Telecommuting (partial) is possible; however, the position requires a local presence in the Bay Area.
HOW TO APPLY:
To apply, please submit detailed resume, cover letter, and project list to firstname.lastname@example.org. No attachments please.
Qualified candidates will be contacted by email, and requested to provide additional information. Based on a review of these responses, shortlisted candidates will be invited for interviews.
The selected candidate will work with our staff of environmental scientists, civil and mechanical engineers, safety professionals and industrial hygienists and spend approximately 75% of their time performing environmental engineering and/or industrial hygiene based assessments and monitoring involving asbestos and lead. Other types of projects that you will perform include indoor air quality investigations; sampling of air, soil and water; construction monitoring; and evaluation of buildings for hazardous materials.
The position will include both field work and office work, and will require work on construction sites near heavy construction equipment, the ability to lift up to 30 pounds and climb ladders, and the ability to enter crawl spaces and attics. The selected candidate must also be in good physical health and be able to wear respiratory and other personal protective equipment. The ability to work nights and weekends, which may occur up to 20% of the time, is also required.
To apply, please submit resume and cover letter to email@example.com. Include your resume in the body of the email.
NO ATTACHMENTS PLEASE.
Important: Be sure to reference Job Code IH-1 in the subject line of your email.
SCA is an equal opportunity employer.
Give2Asia is seeking an intern to work with the Programs Department starting January 27, 2020. Interns will work with members of the Programs team on their program development and grantmaking goals. Interns may also have the opportunity to work cross-functionally across departments (e.g., finance, marketing).
The ideal candidate has, or is working towards, a bachelor’s or master’s degree and interest in International Philanthropy and/or Asian affairs; strong research, writing, and editing skills; computer and internet proficiency; excellent interpersonal skills; and a desire to learn.
Candidates must be available for a minimum of 20 hours per week for an approximately three-month period. Proposed start date is January 27, 2020. This is a volunteer position with a small stipend available for meals and transportation.
We are a fire investigation firm looking for a Research Associate. We need a person who has high attention to detail, great problem-solving and computer skills, is a self-starter and a quick learner, and can stay calm under fire. If you are a quick-witted person who can handle and prioritize multiple demands when things are busy, but can also be self-motivated when things are slow, we are looking for you!
SCA Environmental, Inc. is a small environmental consulting firm with two local Bay Area offices. SCA works for many different types of clients, including cities, agencies, high-rise office building owners, banks, the US military, housing developers, non-profit groups, and manufacturing companies.
We currently have the following positions available in our San Francisco office:
Entry Level Environmental Specialist - San Francisco office (Job Code: ESP2-SF)
The selected candidate will work with our staff of environmental scientists, civil and mechanical engineers, safety professionals and industrial hygienists. The successful candidate will spend approximately 75% of their time performing environmental engineering and/or industrial hygiene based assessments and monitoring involving asbestos and lead. Other types of projects that you will perform include indoor air quality investigations; sampling of air, soil and water; construction monitoring; evaluation of buildings for hazardous materials; and historical site assessments.
The position will include approximately 75% field work and 25% office work over the course of the year. Note that SCA will train you in the necessary technical areas, so you do not need to have experience in all areas. The most important things you can bring to the job are a desire to learn, an ability to be flexible, and a willingness to work hard.
Qualifications & Experience:
• Bachelor’s degree preferred (high-school-level science coursework required; all majors welcome, and OPT candidates are considered)
• Excellent communication (oral and written) skills
• Excellent organizational skills
• Proficient with MS Office (Word, Excel).
• Must be able to work independently and as part of a team
• Ability to multi-task and work on multiple projects at the same time
• Must be physically able to climb a 20′ ladder, lift up to 50 pounds, enter crawl spaces and attics, and work on construction sites near heavy construction equipment and in outside weather conditions such as wet and/or humid conditions. Work may be conducted in locations where noise, fumes, dust, or toxic materials are present.
• Participation in SCA’s Medical Surveillance Program, which requires the selected candidate to maintain a current medical clearance to work and wear respiratory protection
• A reliable car, driver’s license, and auto insurance for field work are REQUIRED
• Ability to work nights and weekends, which occurs up to 25% of the time, is also required.
This is an entry-level position. To apply, please submit resume and cover letter to firstname.lastname@example.org. Include your resume in the body of the email. NO ATTACHMENTS PLEASE. Be sure to reference the exact Job Code in the subject line of your email.
No phone calls please.
SCA is an equal opportunity employer.
Sr. Data Engineer

We are growing! Bridg is seeking a Senior Data Engineer to architect a highly scalable data integration and transformation platform that processes high volumes of data under defined SLAs. You will create and build the platform, including data ingestion and transformation, data governance, machine learning, analytics, and consumer insights.

We’re a rapidly growing start-up in the heart of Sawtelle Japantown, just blocks away from the 405 and the 10. Our loft-style office houses a passionate, hard-working team of ramen-slurpers, beachcombers, and LA-daydreamers. At our core, we’re a tech company, yes, but our people make the magic happen. At Bridg, you will solve complex problems within a business that celebrates innovation and values your contributions. We want you to wake up each day excited to use cutting-edge tools, technologies, and software development practices.

Our platform combines a large-scale, near-real-time data pipeline (tens of billions of data points of sale-transaction data from major retailers) with over 100 microservices. Our current tech stack includes Flink, Kafka, Cassandra, ElasticSearch, AWS Athena, Glue, Redshift, EMR, DynamoDB, and Java Spring Boot based microservices.

A collaborative and innovative culture, Bridg offers highly advanced technology to identify and track purchasers in a physical retail store, relying on data science and probabilistic modeling. The Bridg platform gives retail chains the same level of customer insight (and revenue growth) as data-savvy online retailers like Amazon, leveraging hundreds of millions of daily data points.
Qualifications
3+ years working in Big Data and related technologies
5+ years of Java and Spring Boot experience
Experience building high-performance, scalable distributed systems
AWS cloud experience (EC2, S3, Lambda, EMR, RDS, Redshift)
Experience with a variety of relevant technologies, including Cassandra, AWS DynamoDB, Kafka, AWS Kinesis, Elasticsearch, Machine Learning, Spark, Hadoop, Hive, Presto
Experience in ETL and ELT workflow management
Familiarity with AWS data and analytics technologies such as Glue, Athena, Redshift, Spectrum, Data Pipeline
MS preferred, or BA/BS degree in computer science, a related field, or equivalent practical experience
Experience with Agile methodologies preferred
Experience at start-ups a plus
Experience in a fast-paced development environment ideal
Must pass background check
Must be able to work in the office full time
Must be able to reliably commute to the office daily

What We Offer
A fantastic opportunity to be part of a growing start-up
A chance to work with a passionate, driven, and fun team
An incredible work environment: fun, casual, and fast-paced
Monthly team activities and outings
Loft-style office with plenty of break-out space
Fully stocked company snack area complete with every drink and snack your heart desires
Great benefits: Health, Dental, Vision, and Vacation

Security, Availability and Confidentiality Requirements
You are responsible for protecting the credentials provided to you to access S3’s (and customer, where applicable) networks, systems and data.
You are responsible for maintaining the confidentiality of all S3’s customer data to which you are granted access.
Any suspected compromises of S3’s proprietary data or customer data must be reported to Management immediately.
You will adhere to the S3 Information Security Policy and Procedures and supporting standard operating procedures to protect Company systems and data.
Respond to and resolve customer help desk requests [varies based on role].
You will alert management immediately of any suspected system or data compromises and/or system failures impacting the security, confidentiality, availability and integrity of S3 and customer data.

About Bridg: Bridg is a marketing software company that provides a CRM solution; email and SMS marketing; insights and analytics; and mobile app and loyalty program development for restaurants and retailers. Powered by transaction data, Bridg builds unique 360º customer profiles to understand individualized behavior patterns, providing clients with deep data science used to create wide-reaching, effective personalized marketing campaigns that drive traffic and sales in a measurable way. Our headquarters is located in West LA / Santa Monica, and we offer competitive salaries, great benefits, and a high-energy environment with lots of room for personal and professional growth.
Principal Data Engineer - Big Data Principal Data Engineer - Big Data - Skills Required - SPARK, EMR, Big Data, Hadoop, Data Lakes, MongoDB, AWS, Kinesis, Oracle, Flink
If you are a Principal Data Engineer with experience, please read on!
We are looking for a Lead Data Engineer who can both architect and lead efforts around our expanding data pipeline infrastructure. Our product has most recently been centered in the CRM space, but we are looking to expand far beyond that. Currently, we process millions of data points through multiple data pipelines to feed a suite of datastores and applications. We are preparing for 10x+ growth in both the volume of data processed and the speed at which that data can be made available and actionable. To accomplish this, we are looking for someone who can architect and lead the effort to build out highly scalable data solutions.
If you are highly motivated, super passionate about democracy, and want to join a close-knit team that is looking to build great things together, then this position may be for you. This is a full-time role in Raleigh-Durham, NC; Washington, DC; New York City, NY; Sacramento, CA; Austin, TX; Salt Lake City, UT or Chicago, IL.
Top Reasons to Work with Us
Competitive Base Salary
Full Medical/Health Benefits
Generous Vacation, Paid Holidays & PTO
Fun, Exciting, Positive Culture!
What You Will Be Doing
Lead efforts to design, build, scale, and maintain multiple data pipelines
Architect highly scalable data solutions
Be a technical thought leader within the org
Work closely with business owners and external stakeholders to provide actionable data
Ensure data accuracy and reliability
What You Need for this Position
Experience building large scale streaming and batch data pipelines
Experience using Big Data technologies (Spark, Flink, EMR, Hadoop, data lakes, etc.)
Mastery of multiple databases (e.g. MongoDB, MySQL, etc.)
Understanding of data security best practices
AWS data technologies (e.g. Kinesis, Glue, RDS, Athena, Redshift, etc.)
Experience building out data warehouse infrastructure
DevOps or System Admin experience
Data Science exploration and modeling
So, if you are a Principal Data Engineer - Big Data with experience, please apply today!
Applicants must be authorized to work in the U.S.
CyberCoders, Inc is proud to be an Equal Opportunity Employer
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.
Principal Data Engineer - Big Data | CA-Sacramento | NV1-1552180
A social media company has a newly created position available. Excellent salary and benefits. Work location is downtown DC (Metro accessible).
We’re looking to hire a Data Engineer to help to further develop our analytics platform for audience and campaign data. The ideal candidate has a passion for data and big data technology, and will be joining a team of seasoned data engineers to take that passion to the next level!
How You Can Make An Impact:
BIG DATA HADOOP ENGINEER / MACHINE LEARNING ENGINEER-112727
1-year contract

The Machine Learning Engineer shall lead the ML Modeling and Engineering team. The consultant will play the role of technical lead and provide professional services to support long-term IT strategy and planning, including high-level analysis, professional reports and presentations, and mentoring, support and training.

Deliverables
The tasks for the Machine Learning Engineer include, but are not limited to, the following:
Provide technical leadership, develop vision, gather requirements and translate client user requirements into technical architecture.
Design, build and scale Machine Learning systems across multiple domains.
Design and implement an integrated Big Data platform and analytics solution.
Design and implement data collectors to collect and transport data to the Big Data Platform.

Technical Knowledge and Skills:
Strong hands-on experience in building, deploying and productionizing ML models using software such as Spark MLlib, TensorFlow, PyTorch, Python scikit-learn, etc. is mandatory.
Ability to evaluate and choose the best-suited ML algorithms, perform feature engineering and optimize Machine Learning models is mandatory.
Strong fundamentals in algorithms, data structures, statistics, predictive modeling, and distributed systems is a must.
4+ years of hands-on development, deployment and production support experience in a Hadoop environment.
4-5 years of programming experience in Java, Scala, Python.
Proficient in SQL and relational database design and methods for data retrieval.
Knowledge of NoSQL systems like HBase or Cassandra.
Hands-on experience with Cloudera Distribution 6.x.
Hands-on experience creating and indexing Solr collections in a Solr Cloud environment.
Hands-on experience building data pipelines using Hadoop components: Sqoop, Hive, Solr, MR, Spark, Spark SQL.
Must have experience developing Hive QL and UDFs for analyzing semi-structured/structured datasets.
Must have experience with the Spring framework.
Hands-on experience ingesting and processing various file formats like Avro/Parquet/Sequence Files/Text Files, etc.
Hands-on experience working in real-time analytics like Spark/Kafka/Storm.
Experience with graph databases like Neo4j, TigerGraph, OrientDB.
Must have working experience with data warehousing and Business Intelligence systems.
Expertise in a Unix/Linux environment in writing scripts and scheduling/executing jobs.
Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience in the production support issue resolution process.
Strong experience with Data Science Notebooks like RStudio, Jupyter, Zeppelin, PyCharm, etc.

Preferred Skills:
Strong SQL skills; Java, Spring, Scala, Cloudera Hadoop, MLlib, Spark, HBase, Neo4j, Solr, Python, Machine Learning, Data Science Notebooks
B.E/B.Tech in Computer Science or related technical degree
Do you view data as an art and a science? So do we.
Avanade leads in providing creative digital services, business solutions and design-led experiences for its clients, delivered through the power of people and the Microsoft ecosystem. Our professionals combine technology, business, and industry expertise to build and deploy solutions to realize results for clients and their customers. Avanade has 34,000 digitally connected people across 24 countries, bringing clients the best thinking through a collaborative culture that honors diversity and reflects the communities in which we operate. We welcome all and seek talented individuals who can bring their whole self to work, build inclusive teams and encourage diversity inside and outside the organization. Majority owned by Accenture, Avanade was founded in 2000 by Accenture LLP and Microsoft Corporation.
How we support you:
We believe in gender equity and an inclusive community. We offer a comprehensive benefits package: generous vacation allowance, disability coverage, retirement plans, paid maternity and paternity leave, life insurance, hotel and travel discounts, extended benefits to cover items that support your well-being, health, dental and vision insurance, and professional development and paid Microsoft certification opportunities.
You draw on your considerable experience in bringing data and statistics to life to solve sometimes complex problems, and you're comfortable looking after several projects at once. You're able to make your own decisions while at the same time supporting more junior team members.
As a Big Data/PySpark Engineer at Avanade, you will have a deep understanding of the architecture, performance characteristics and limitations of modern storage and computational frameworks, with experience implementing solutions that leverage HDFS/Hive, Spark/MLlib, Kafka, etc. You will have knowledge of Apache Spark and/or Python programming, and deep experience developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations. Knowledge of Python packaging, Azure Data Lake, and Databricks. Experience with ML. Manage large volumes of structured and unstructured data, and extract and clean data to make it amenable for analysis. 80% travel is required.
Azure or AWS
Key Role Responsibilities:
Demonstrated expertise working with and maintaining open-source data analysis platforms, including but not limited to:
Pandas, Scikit-Learn, Matplotlib, TensorFlow, Jupyter and other Python data tools
Spark (PySpark), HDFS, Kafka and other high-volume data tools
SQL and NoSQL storage tools, such as MySQL, Postgres, Cassandra, MongoDB and ElasticSearch.
Deep understanding of the architecture, performance characteristics and limitations of modern storage and computational frameworks, with experience implementing solutions that leverage: HDFS/Hive; Spark/MLlib; Kafka, etc.
Technical background in computer science, data science, machine learning, artificial intelligence, statistics or other quantitative and computational science
Hands-on experience using Big Data and statistical analysis tools such as Hadoop/Spark, SQL
Experience transforming data at scale using Spark/PySpark
Working experience with Linux OS (Redhat/Ubuntu)
Excellent communication skills (speaking, presenting)
Preferred Years of Work Experience:
You likely have about 2-6+ years of relevant professional experience.
· As an AWS Big Data Consultant, work with implementation teams from concept to operations, providing deep technical subject matter expertise for successfully deploying large-scale data solutions in the enterprise, using modern data/analytics technologies on premises and in the cloud
· Create detailed target state technical, security, data and operational architecture and design blueprints incorporating modern data technologies and cloud data services demonstrating modernization value proposition
· Build, test and deploy solutions using cloud data services
· Provide business analysis and programming expertise within an assigned business unit/area in the analysis, design, and development of business applications.
· Participate in business and IT project estimation activities. Provide technical leadership for small to medium-scale projects. Utilize business knowledge to collaborate and offer technical solutions.
· Be self-guided and complete tasks with minimum assistance.
BIG DATA ENGINEER
-Gather and process raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.).
-Work closely with engineering, sysops and devops teams to integrate data algorithms and code into productionized systems.
-Process unstructured data into a form suitable for analysis.
-Support business decisions with ad hoc analysis.
-Significant experience with Elastic MapReduce. Experience with Hortonworks or Cloudera Hadoop distributions may also be acceptable.
-Familiarity with some or all AWS data services: RDS, DynamoDB, Data Pipeline, QuickSight, ElastiCache, Kinesis, S3
Sogeti is a leading provider of professional technology services, specializing in Application Management, Infrastructure Management and High-Tech Engineering. Sogeti offers cutting-edge solutions around Testing, Business Intelligence, Mobility, Cloud and Security, combining world-class methodologies and the global delivery model, Rightshore. Sogeti brings together more than 20,000 professionals in 15 countries and is present in over 100 locations in Europe, the US and India. Sogeti is a wholly-owned subsidiary of Cap Gemini S.A., listed on the Paris Stock Exchange.
At Sogeti USA, we are committed to building a long and enduring relationship with our employees and to creating an environment that rewards and empowers. Our mission is to constantly exceed our employees' expectations in the same way that we strive to exceed our clients' expectations. We offer an environment that celebrates innovation and helps you to achieve a good balance between your professional and personal life. We strive to be an employer of choice.
The benefits our employees enjoy:
401(k) Savings Plan - matched 150% up to 6%. (Our 401(k) is in the top 1% of 401(k) plans offered in the US!)
Bonus Program that pays up to $24K!
$12,000 in Tuition Reimbursement
100% Company-paid mobile phone plan
Personal Time Off (PTO) - ensuring a balance of work and home life
Sogeti is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
Our client is seeking a resourceful Software Engineer (Big Data Analytics) to help build the next-generation analytics platform. You'll create our cloud (AWS) based data processing pipeline, data processing infrastructure, and internal APIs. Our platform needs to support multiple types of scalability challenges - from handling terabytes of data to providing analytics in real-time. This is a critical role that will have a significant impact on the product direction, technology and business.
Job Title : Sr. Big Data Engineer
Should have 10 years of development experience, with at least 3 years as a Data Engineer and 1 year of experience in the AWS cloud.
Big Data Engineers serve as the backbone of the Strategic Analytics organization, ensuring both the reliability and applicability of the team's data products to the entire Samsung organization. They have extensive experience with ETL design, coding, and testing patterns as well as engineering software platforms and large-scale data infrastructures. Big Data Engineers are able to architect highly scalable end-to-end pipelines using different open-source tools, including building and operationalizing high-performance algorithms.
Big Data Engineers understand how to apply technologies to solve big data problems, with expert knowledge of languages and technologies like Java, Python, Linux, PHP, Hive, Impala, and Spark. Extensive experience working with both 1) big data platforms and 2) real-time / streaming delivery of data is essential.
Big data engineers implement complex big data projects with a focus on collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into actionable deliverables across customer-facing platforms. They have a strong aptitude to decide on the needed hardware and software design and can guide the development of such designs through both proof of concepts and complete implementations.
Role and Responsibilities
Translate complex functional and technical requirements into detailed design.
Design for now and future success
Hadoop technical development and implementation.
Loading from disparate data sets by leveraging various big data technologies, e.g., Kafka
Pre-processing using Hive, Impala, Spark, and Pig
Design and implement data modeling
Maintain security and data privacy in an environment secured using Kerberos and LDAP
High-speed querying using in-memory technologies such as Spark.
Following and contributing to best engineering practices for source control, release management, deployment, etc.
Production support, job scheduling/monitoring, ETL data quality, data freshness reporting
7+ years of Python or Java/J2EE development experience
3+ years of demonstrated technical proficiency with Hadoop and big data projects
5-8 years of demonstrated experience and success in data modeling
Fluent in writing shell scripts [bash, korn]
Writing high-performance, reliable and maintainable code
Ability to write MapReduce jobs
Ability to set up, maintain, and implement Kafka topics and processes
Skills and Qualifications
Samsung Electronics America, Inc. is committed to employing a diverse workforce, and provides Equal Employment Opportunity for all individuals regardless of race, color, religion, gender, age, national origin, marital status, sexual orientation, gender identity, status as a protected veteran, genetic information, status as a qualified individual with a disability, or any other characteristic protected by law.
Samsung Electronics is a global leader in technology, opening new possibilities for people everywhere. Through relentless innovation and discovery, we are transforming the worlds of TVs, smartphones, wearable devices, tablets, digital appliances, and network systems, and the entire semiconductor industry with our memory, system LSI, foundry, and LED solutions. Samsung is also leading in the development of the Internet of Things through, among others, our Smart Home and Digital Health initiatives.
Since being established in 1969, Samsung Electronics has grown into one of the world's leading technology companies and has become recognized as one of the top global brands. Our network now extends across the world, and Samsung takes great pride in the creativity and diversity of its talented people, who drive our growth. To discover more, please visit our official newsroom at https://news.samsung.com/global/.
We believe the software world is embracing the cloud at an exponential speed, and any company that doesn’t have a cloud strategy will not be relevant in 5 years. Furthermore, any software company addressing data problems that does not have a cloud-native offering TODAY will lose the battle tomorrow. This position plays a vital role in building that strategic intellectual property for Promethium.
What we want from you:
Ability to architect, prototype, and build big data solutions on top of AWS big data products (and later on Azure)
Relentless focus on the customer and business needs
Great communication skills, ability to work proactively and collaboratively
What you can get from this opportunity:
You will have the opportunity to work on state-of-the-art cloud data processing technology.
You will own the cloud big data architecture solution.
You will work with a friendly and extremely technical engineering team to build a long-lasting engineering legacy.
8+ years of experience architecting and implementing big data solutions
(Must have) 3+ years of hands-on experience with major AWS big data products such as Glue, EMR, Athena, and Kinesis
Deep understanding of the architecture of AWS big data products such as Glue, EMR, Athena, and Kinesis
A solid understanding of security for big data solutions (particularly IAM) is a huge plus
Proficiency in at least one of Python, Java, or Scala
Effective communication, presentation, and collaboration skills are required
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient is on a mission to help the Healthcare industry take advantage of modern data and analytics architectures, tools, and patterns to improve the quality and affordability of care. This is an excellent opportunity for the right individual to assist Perficient and its customers to grow the capabilities necessary to improve care through better use of data and information, and in the process take their career to the next level.
Perficient currently has a career opportunity for a Data Developer on its market-leading Data Solutions team.
Must be local to the Boston metro area or willing to relocate there.
As a Data Developer you will participate in all aspects of the software development lifecycle, which includes estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our clients. Working in a team environment, you will collaborate with solution architects and developers on the interpretation/translation of wireframes and creative designs into functional requirements, and subsequently into technical design.
Lead the technical planning and requirements-gathering phases, including estimation, development, testing, project management, architecture, and delivery.
Serve as a technical lead and mentor. Provide technical support or leadership in the development and continual improvement of service.
Develop and maintain effective working relationships with team members.
Demonstrate the ability to adapt and work with team members of various experience levels.
SQL Proficiency is a must.
Proficiency with Spark with Java is required.
Knowledge of ETL and Stored Procedures in a major EDW (Netezza, Oracle, or Teradata preferred) platform is a must.
Experience migrating workloads from traditional data warehouse architectures to Hadoop is preferred.
Knowledge of data formats and ETL and ELT processes in a Hadoop environment including Hive, Parquet, MapReduce, YARN, HBase and other NoSQL databases.
Experience in dealing with structured, semi-structured and unstructured data in batch and real-time environments.
Experience with working in AWS environments including EC2, S3, Lambda, RDS, etc. Familiarity with DevOps and CI/CD as well as Agile tools and processes including Git, Jenkins, Jira and Confluence.
Client facing or consulting experience highly preferred.
Skilled problem solvers with the desire and proven ability to create innovative solutions.
Flexible and adaptable attitude, disciplined to manage multiple responsibilities and adjust to varied environments.
Future technology leaders: dynamic individuals energized by fast-paced personal and professional growth.
Phenomenal communicators who can explain and present concepts to technical and non-technical audiences alike, including high level decision makers.
Bachelor's degree in MIS, Computer Science, Math, Engineering, or a comparable major.
Solid foundation in Computer Science, with strong competencies in data structures, algorithms and software design.
Knowledge and experience in developing software using agile methodologies.
Proficient in authoring, editing and presenting technical documents.
Ability to communicate effectively via multiple channels (verbal, written, etc.) with technical and non-technical staff.
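Postings like this one weigh ETL and ELT fluency heavily. As a minimal sketch, using Python's stdlib sqlite3 as a stand-in for an EDW such as Netezza or Teradata (the table names and data are hypothetical), an extract-transform-load pass might look like:

```python
import sqlite3

# Hypothetical schemas: a raw staging table and a cleaned warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, " 19.99", "ne"), (2, "5.00 ", "NE"), (3, "bad", "se")],
)
conn.execute("CREATE TABLE fact_orders (id INTEGER, amount REAL, region TEXT)")

# Extract: pull raw rows out of the source system.
rows = conn.execute("SELECT id, amount, region FROM raw_orders").fetchall()

# Transform: trim and cast amounts, normalize region codes, drop bad records.
clean = []
for oid, amount, region in rows:
    try:
        clean.append((oid, float(amount.strip()), region.strip().upper()))
    except ValueError:
        continue  # in a real pipeline this row would land in a reject table

# Load: write the conformed rows into the warehouse table.
conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", clean)
total = conn.execute("SELECT SUM(amount) FROM fact_orders").fetchone()[0]
print(round(total, 2))  # 24.99
```

In an ELT variant the raw rows would be loaded first and the transform expressed as SQL (or Hive/Spark SQL) inside the warehouse; the division of labor, not the logic, is what changes.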
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
External Company Name: Perficient, Inc
External Company URL: www.perficient.com
Street: One Beacon Street, 15th Floor
Why American Express?
There's a difference between having a job and making a difference.
American Express has been making a difference in people's lives for over 160 years, backing them in moments big and small, granting access, tools, and resources to take on their biggest challenges and reap the greatest rewards.
We've also made a difference in the lives of our people, providing a culture of learning and collaboration, and helping them with what they need to succeed and thrive. We have their backs as they grow their skills, conquer new challenges, or even take time to spend with their family or community. And when they're ready to take on a new career path, we're right there with them, giving them the guidance and momentum into the best future they envision.
Because we believe that the best way to back our customers is to back our people.
The powerful backing of American Express.
Don't make a difference without it.
Don't live life without it.
You won't just shape the world of software. You'll shape the world of life, work, and play.
Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every day. Today, innovative ideas, insight, and new perspectives are at the core of how we create a more powerful, personal, and fulfilling experience for all our customers. So if you're interested in a career creating breakthrough software and making an impact on an audience of millions, look no further.
You won't just keep up, you'll break new ground.
There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:
* Taking your place as a core member of an agile team driving the latest development practices
* Writing code and unit tests, working with API specs and automation
* Identifying opportunities for adopting new technologies
* Performing technical aspects of software development for assigned applications, including design, developing prototypes, and coding assignments
* Functioning as a leader on an agile team by contributing to software builds through consistent development practices (tools, common components, and documentation)
* Leading code reviews and automated testing
* Debugging software components and identifying code defects for remediation
* Leading the deployment, support, and monitoring of software across test, integration, and production environments
* Empowering teams to automate deployments in test or production environments
* Empowering teams to automatically scale applications based on demand projections
Leadership:
* Takes accountability for the success of the team in achieving its goals
* Drives the team's strategy and prioritizes initiatives
* Influences team members by challenging the status quo, demonstrating risk taking, and implementing innovative ideas
* Acts as a productivity multiplier for the team by analyzing workflow and contributing to make the team more effective and productive, with faster and stronger results
* Mentors and guides team members to success within the team
Are you up for the challenge?
* Bachelor's degree in computer science, computer science engineering, or related experience required; advanced degree preferred
* Experience with Agile or other rapid application development methods and tools, preferably Agile Scrum and SAFe Agile, plus Agile tools like Rally/JIRA
* Ability to effectively interpret technical and business objectives and challenges and articulate solutions
* Willingness to learn new technologies and exploit them to their optimal potential
* 7 years of wide breadth of engineering experience in application design, software development, automated testing, and production support in a professional environment, and/or comparable experience such as:
o Demonstrated experience leading teams of engineers
o Experience with API development
o Experience with Web Services, Cloud Integration and Development
o Hands-on expertise with application design and software development in Big Data across one or more platforms, languages, and tools (e.g. Java, J2EE, Big Data components/frameworks such as Hadoop, HBase, MapReduce, HDFS, Pig, Hive, Python, Spark, Spring Boot, Elasticsearch, etc.)
o Experience with distributed (multi-tiered) systems and relational databases
o Experience with implementing integrated automated release management using tools/technologies/frameworks like Maven, Subversion, and code/security review tools
o Experience with DevOps, CI/CD, and automated testing tools such as SonarQube, Jenkins, JMeter, etc.
o Familiarity with machine learning techniques and algorithms such as Regression, Clustering, Random Forest, Time Series Forecasting, etc.
At the core of Software Engineering, every member of our team must be able to demonstrate the following technical, functional, leadership, and business core competencies:
* Agile Practices
* Porting/Software Configuration
* Programming Languages and Frameworks
* Business Analysis
* Analytical Thinking
* Business Product Knowledge
Employment eligibility to work with American Express in the U.S. is required, as the company will not pursue visa sponsorship for these positions.
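Of the machine learning techniques named above, regression is the simplest to sketch. The closed-form ordinary-least-squares fit below is purely illustrative; real work at this scale would use a library such as scikit-learn or Spark MLlib.

```python
# Toy single-feature OLS fit: slope = cov(x, y) / var(x).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```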
Title: Senior Big Data Java Engineer
Requisition ID: 19018914
Robert Half Technology is looking for a Data Engineer to work for their client in Santa Monica, a tech ad agency. The Data Engineer candidate should have at least three years of Python experience and working knowledge of SQL. Responsibilities for the Data Engineer position include scripting in Python, managing large data sets, and analyzing data in the tech industry. This is a great opportunity to learn new data engineering skills on the job and be part of one of the fastest-growing companies in the LA market.
If you are interested in this opportunity, please call Vice President - Division Director Jimmy Escobar at (310) 209-6838 or Jimmy.Escobar@RHT.com - https://www.linkedin.com/in/jimmyescobar/
Python - MUST
Technology doesn't change the world. People do.
As a technology staffing firm, we can't think of a more fitting mantra. We're extreme believers in technology and the incredible things it can do. But we know that behind every smart piece of software, every powerful processor, and every brilliant line of code is an even more brilliant person.
Leader among IT staffing agencies
The intersection of technology and people: it's where we live. Backed by more than 65 years of experience, Robert Half Technology is a leader among IT staffing agencies. Whether you're looking to hire experienced technology talent or find the best technology jobs, we are your IT expert to call.
We understand not only the art of matching people, but also the science of technology. We use a proprietary matching tool that helps our staffing professionals connect just the right person to just the right job. And our network of industry connections and strategic partners remains unmatched.
Apply for this job now or contact our branch office at 888-490-4429 to learn more about this position.
All applicants applying for U.S. job openings must be authorized to work in the United States. All applicants applying for Canadian job openings must be authorized to work in Canada.
© 2019 Robert Half Technology. An Equal Opportunity Employer M/F/Disability/Veterans.
Salary: $52.25 - $60.50 / Hourly
Location: Santa Monica, CA
Date Posted: November 30, 2019
Employment Type: Temporary
Job Reference: 00320-0011280753
Staffing Area: Technology
The Big Data Engineer is responsible for building scalable data platforms, and large-scale processing systems that enable advanced analytics and support data teams across the enterprise. This hands-on role requires some experience working with AWS cloud platform in addition to expertise in a variety of technologies. The Big Data Engineer will develop and manage the enterprise data warehouse while sourcing data from various databases/applications & web APIs using stream and batch processing architectures.
Role Specific Skills
These are the professional skills we would expect from an individual fully established in this role.
Please note, this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities, and schedule may change at any time with or without notice.
SMS Assist is an Equal Opportunity Employer (EOE) that welcomes and encourages all applicants to apply regardless of age, race, color, religion, sex, sexual orientation, gender identity and/or expression, national origin, disability, veteran status, marital or parental status, ancestry, citizenship status, pregnancy or other reasons prohibited by law.
Essential Duties & Responsibilities:
· Be able to effectively communicate with both technical and business stakeholders for requirements analysis, and be highly proficient in the technical architecture, design, and development of databases.
· Provide Project, ITE, ODTR and Production support including analyzing incidents and identifying the root cause.
· Migrating data and related functionality from legacy systems to modernized solutions.
· Developing and managing data processes to ensure that data is available and usable
· Creating data platforms, integration architectures, and pipelines
· Managing and monitoring data via automated testing frameworks (Data-Driven Testing, TDD, etc.)
· Ensuring that data is consistently available and of sufficient quality to be considered fit for use
· Design and perform all activities related to big data architecture components between environments during development and deployment.
· Work with Business Analysts and leads to build a functional understanding of development assignments for themselves and the developers they supervise.
· Working closely with data architects, data scientists, and data visualization developers to design, build, test, deliver, and maintain sustainable and highly scalable data solutions
· Supervise work of Big Data Developer including assigning work, providing technical direction and performing code reviews
Knowledge, Skills, and Abilities:
· Experience with Big Data technologies including Hadoop, Hive, and Spark.
· Exhibit exceptional technical skills in database architecture, database design, and ETL.
· Demonstrates knowledge of and proper adherence to the Software Development Life Cycle (SDLC).
· Demonstrates tool expertise in front-end and back-end tools.
· Experience in SQL-based technologies
· Excellent analytical and problem-solving skills to quickly recognize, isolate, and resolve technical problems.
· Experience architecting big data solutions
· Hands-on experience with implementation and support of a business intelligence reporting suite.
· Understand business requirements and able to create/propose solutions.
· Ability to work independently, prioritize tasks appropriately, and adapt quickly to project changes.
Education, Experience, & Certifications:
· Minimum of 5-7+ years of experience.
· Bachelor’s degree in Engineering, Mathematics, Computer Science, Information Systems, Economics or Business, or equivalent.
Big Data Engineer
Our direct client, a fast-growing software and data analytics firm in Greenwich, is seeking to bring on a mid to senior level Big Data Engineer. You will be part of the team to design and implement solutions integrated into client’s analytics Hadoop/EMR, Spark and Elasticsearch systems. The ideal candidate is an experienced application developer with a concentration in data pipeline building, automation, data warehousing and data modeling. The successful candidate will be responsible for expanding and optimizing data, analytics and data pipeline architecture, as well as optimizing data flow and building out a semantic layer to support cross functional teams. The Data Engineer will support software developers, business analysts and data scientists on data and computational initiatives and will ensure optimal data architectures and patterns are consistent throughout ongoing projects.
Big Data Engineer
Location: Herndon, VA
CMCI provides superior management consulting and IT services that empower enterprises to achieve their business goals in today's highly competitive market. Our goal is to seamlessly integrate into each customer's organization in order to fully understand their business and technology needs. This approach allows us to quickly deliver superior quality solutions while achieving the highest level of customer satisfaction on time and within budget. By choosing CMCI, you are choosing a company that can deliver on business outcomes and mission needs in the most cost-effective manner and without sacrificing capability. As a part of CMCI's culture of loyalty and commitment to its employees, CMCI is committed to provide a tremendous career path by promoting employees to their highest potential.
· Perform development and maintenance of end-user focused, object-oriented, data-driven analytic applications to support CBP threat analysis and targeting.
· Independently identify technical solutions for business problems, directly contribute to conceptual design, and routinely collaborate with Enterprise/Application architects, Database Architects, Data Scientists, and mission stakeholders.
· Develop new code, modify existing application code, conduct unit and system testing, and engage in rigorous documentation of developed and delivered application use cases, data flows, and functional operations.
· Integrate with, and materially contribute to, project portfolio teams as a matrixed resource to provide development and issue resolution expertise in collaboration with data scientists, intelligence analysts, developers, and other participants at the direction of a project manager.
· Demonstrated expertise in Java and Object-Oriented Design and development principles.
· Experience with Java 8+ and the newer language features.
· Experience with the Hadoop ecosystem, including HDFS, YARN, Hive, Pig, and batch-oriented and streaming distributed processing methods such as Spark, Kafka, or Storm
· Experience with distributed data/computing tools such as Hadoop, Spark, Impala, etc.
· Experience with one or more relational database systems such as Oracle, MySQL, Postgres, etc.
· Experience with SQL.
· Experience with one or more build tools such as Maven, Gradle, etc.
· Software Configuration Management (SCM) using Git.
· Comfortable with Linux.
· High level of self-motivation, desire to deliver stellar solutions and willingness to work in a distributed team environment.
Desired Knowledge and Experience
· Experience delivering solutions using Amazon Web Services (AWS EC2, RDS, S3)
· Experience with distributed search engines like Elasticsearch or Solr
· NoSQL database systems such as Cassandra, MongoDB, DynamoDB, etc.
· Familiarity with Atlassian tools such as Jira, BitBucket
· Continuous integration with Jenkins or Bamboo.
Clearance Requirement: We are looking for candidates with either CBP clearance or DoD Top Secret clearance.
· Master’s degree in computer science or related field
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.
Big Data Engineer
Global Leader in eCommerce
A globally-renowned eCommerce business is looking for a Big Data Engineer to join their team in the Greater Boston Area. If you are ambitious, want to join a centralized, data-driven organization, and have a solid background in leading big data-driven engineering teams, then we want to speak to you. This is the perfect role for someone who enjoys a fast-paced environment using the newest technologies available.
THE ROLE - Big Data Engineer
YOUR SKILLS AND EXPERIENCE
You will receive a total compensation of up to $160k, industry-leading benefits, a strong bonus package, the opportunity to grow into a management role and much more.
HOW TO APPLY
Please register your interest by sending your CV to Tommy Daughtrey via the Apply link on this page.
Our direct client is looking for a Big Data Engineer. The position is based out of Rockville, MD. Multiple Location(s): Washington, D.C. Metro Area and New York City
Job Title: Big Data Engineer
Location: Rockville, MD, with a possible additional location of Reston, VA. Multiple location(s): Washington, D.C. metro area and New York City
Experience levels: accepting Junior (4-6 yrs), Mid (8-10 yrs), and Senior (10+ yrs)
Duration: Long term. After 6 months, the client will hire candidates who are open to full-time permanent positions.
Work Authorization: US Citizen, GC, EAD, OPT, H1B, etc.
Number of openings: 40
Start Date: Immediate
Local candidates: phone and on-site interviews.
Non-local candidates: phone and video interviews.
In the Market Regulation Surveillance Patterns technology team, an expert in this role is vested with responsibilities arising from FINRA’s mission to protect the integrity of US Securities Capital markets. FINRA has a portfolio of ‘surveillance patterns’ that look for manipulative and non-compliant behavior in the database of all transactions that occur in the stock market. The database itself consists of tens of billions of records per day. Engineers work with these large volumes of data using state-of-the-art and industry standard technologies, all of which are wholly operated in a cloud-computing environment.
In order to operate with ever improving effectiveness, the team operates in a rich culture of performance and innovation. Collaboration within the team and with business stakeholders is frictionless, with significant independence offered in order to experiment with better ways of conducting technology for business value. Technological and career growth opportunities are a natural and every day part of the working environment.
Please read further to get a better feel for the role and the skills that lead to success in this team.
· Analyze system requirements and design responsive algorithms and solutions
· Use big data and cloud technologies to produce production quality code
· Engage in performance tuning and scalability engineering
· Work with team, peers and management to identify objectives and set priorities
· Perform related SDLC engineering activities like sprint planning and estimation
· Work effectively in small agile teams
· Provide creative solutions to problems
· Identify opportunities for improvement and execute
· Experience with cloud-based Big Data technologies
· Proficiency in Hive / Spark SQL
· Experience with Spark
· Experience with one or more programming languages like Scala, Python, and/or Java
· Ability to push the frontier of technology and independently pursue better alternatives
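The Hive / Spark SQL proficiency asked for above largely means aggregation-style queries over very large transaction tables. Below is a toy sketch using Python's stdlib sqlite3; the schema, data, and threshold are invented for illustration and are not an actual FINRA surveillance pattern.

```python
import sqlite3

# Toy transaction table; real surveillance runs over tens of billions of
# records per day on cloud-scale engines, but the SQL shape is similar.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (account TEXT, symbol TEXT, qty INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?)",
    [
        ("A1", "XYZ", 500, 10.0),
        ("A1", "XYZ", 700, 10.1),
        ("A2", "XYZ", 100, 10.0),
        ("A2", "ABC", 50, 99.0),
    ],
)

# Hive/Spark-SQL-style aggregation: total shares per account and symbol,
# keeping only pairs above an arbitrary, illustrative review threshold.
flagged = conn.execute(
    """
    SELECT account, symbol, SUM(qty) AS total_qty
    FROM trades
    GROUP BY account, symbol
    HAVING SUM(qty) > 1000
    ORDER BY total_qty DESC
    """
).fetchall()
print(flagged)  # [('A1', 'XYZ', 1200)]
```

The same GROUP BY / HAVING shape runs unchanged on Hive or Spark SQL; only the engine underneath scales.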
Please apply with your updated resume along with the following details: Full Name, Work Authorization, Current Location, Hourly Rate, Contact details and we will contact you to provide more information.
4695 Chabot Dr, Suite#200,
Pleasanton, CA 94588
Are you a talented Data Engineer? Do you have a proven background in developing Kafka pipelines and working at the petabyte scale? Do you want to join one of the world's fastest growing tech labs?
One of the world's blue-chips is looking to scale their AI offering with a focus on NLP and speech technology. They are now looking to recruit three experienced Data Pipeline Engineers. You'll be responsible for building and managing data pipelines and stream processing infrastructure. Other responsibilities will include:
· Building and maintaining the core petabyte-scale data storage system as a hybrid of Hibernate and Amazon S3.
· Building and maintaining Kafka pipelines.
· Building pipelines to support new artificial intelligence models.
You'll have:
· More than 3 years' commercial experience with skills in Python/Java and JSON
· Strong cluster computing experience
· Applied experience with Amazon S3
· Prior experience of building infrastructure at the petabyte scale
Big Cloud is acting as an employment agency in relation to this vacancy.
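The staged pipeline shape this role describes can be sketched with plain Python generators. This is only an illustration of the source, transform, and sink stages, not the Kafka-based infrastructure itself, and all names and records are invented.

```python
def source(records):
    # Source stage: stand-in for consuming records from a topic.
    for record in records:
        yield record

def parse(stream):
    # Transform stage: split "key:value" records, dropping malformed ones.
    for record in stream:
        try:
            key, value = record.split(":")
            yield key, int(value)
        except ValueError:
            continue  # a real pipeline would route this to a dead-letter queue

def sink(stream):
    # Sink stage: aggregate values per key.
    totals = {}
    for key, value in stream:
        totals[key] = totals.get(key, 0) + value
    return totals

events = ["clicks:3", "views:10", "bad-record", "clicks:2"]
print(sink(parse(source(events))))  # {'clicks': 5, 'views': 10}
```

Because each stage consumes the previous one lazily, records flow through one at a time, which is the same backpressure-friendly property a Kafka consumer/producer chain provides at scale.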
Big Data Engineers -
We have an urgent need to bring in 2 big data engineers for our area. Skills required are Teradata, Hadoop, MapReduce, Hive, Pig, Data streaming, NoSQL, SQL, and programming.
We are seeking a Big Data Engineer to support our customer.
Designs, modifies, develops, writes and implements software systems. Participates in software and systems testing, validation, and maintenance processes through test witnessing, certification of software, and other activities as directed. Provides support to senior staff on projects/programs. Familiar with standard concepts, practices, and procedures within a variety of fields related to the project. This position takes direction from senior technical leadership.
The Big Data Engineer (BDE) is responsible for building the next generation of web applications and systems focusing on capability delivery to end users. The BDE is a member of a big data team of specialists within the multi-disciplinary agile development team. The BDE will manage the full lifecycle of requirements collection, software design, development, and delivery in support of analysts. The BDE helps manage effective processes associated with the architecture. The BDE collaborates closely with the Agile Software Developers (ASDs), Technical Targeting Developers (TTDs), and the end-user analysts to write and implement cutting-edge big data algorithms and analytics. The BDE engages in software solution planning and creation to ensure capabilities are delivered using the latest available technologies and methods. The BDE will operate in a RAD/JAD environment in which tasks are rapidly defined and then executed to ensure maximum user input, feedback, and adoption. The BDE ensures the interoperability of the in-house capability with outside partners.
Bachelor of Arts or Bachelor of Science in Computer Science or related fields (e.g. Statistics, Mathematics, Engineering), or equivalent in years of experience, or demonstrates adequate knowledge for the position.
Must have active TS/SCI clearance
Physical Demands - The physical demands described here are representative of those that may need to be met by an employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
While performing the duties of this Job, the employee is regularly required to sit and talk or hear. The employee is frequently required to walk; use hands to finger, handle, or feel and reach with hands and arms. The employee is occasionally required to stand; climb or balance and stoop, kneel, crouch, or crawl. The employee must occasionally lift and/or move up to 20 pounds.
HII-MDIS, formerly Fulcrum, is an equal opportunity employer and gives consideration for employment to qualified applicants without regard to race, color, religion, sex, national origin, disability or protected veteran status. EOE Minorities/Females/Veterans/Disability
IQuest Solutions Corp was founded in 2004. At our company, you will be a part of a leading information-based technology company. Our success is grounded in our team of talented and creative professionals. When you become a member of our global team of professionals, you'll have the opportunity to fast-track your career by leveraging the experience, the vision, and the worldwide scope of a rapidly growing technology services company. To create a dynamic enterprise, we hire the best and smartest individuals and strive to foster an environment of focus, excellence, convergence, and growth. We currently have a full-time position available, and sponsorship is available for the qualified candidate.
Big Data Software Engineer (Spark/Scala/Java/AWS)
As a Big Data Software Engineer at IQuest Solutions Corp, you’ll be part of a team that’s leading the next wave of disruption at a whole new scale, using the latest distributed computing technologies and operating across large sets of data spanning across AWS usage to other cloud related data.
Day To Day Responsibilities
· Using Big Data tools (Hadoop, Spark, AWS) to conduct the analysis of petabytes of AWS usage data and other data
· Writing software to clean and investigate large, messy data sets of numerical data
· Integrating with external data sources and APIs to discover interesting trends
· Designing rich data visualizations to communicate complex ideas to customers or company leaders using Tableau or other tools
· Working directly with Product Owners and end-users to develop solutions in a highly collaborative and agile environment
· The ability to own the application end to end and act as an owner
· Bachelor's degree or military experience
· At least 2 years of experience with Java
· At least 1 year of experience with Scala
· At least 2 years of experience with Python
· At least 2 years of experience with Spark
· At least 1 year of experience with AWS
· At least 2 years of experience with SQL
· Master's degree or 3 years of relevant experience
· 1+ year of experience with Spark or Hadoop
· 1+ year of experience with Kafka, Tableau or Databricks
Job Category – Software Engineering, Technology Explorers