Job Description


Multiple Locations: Buffalo, NY; Boulder, CO; New York, NY


Junior Data Scientist


Senior Data Scientist


Interested in driving data science innovation?


In this position you will be collaborating on data science projects with disciplined utility experts. Example projects include predicting asset failures, analyzing aerial imagery using computer vision algorithms and geo-spatial statistics, forecasting power outages at fine spatio-temporal resolution, and modeling and forecasting consumer adoption of energy efficient products and services, including demand-response programs, electric vehicles, solar, and energy storage products.


The ideal candidate will have significant experience using R, R Shiny, R Markdown, and Python (with a preference for fluency in all) in multiple analytical applications, such as geo-informatics, non-linear time series analysis, machine learning (e.g. binomial/regression models, ensemble models), deep learning (e.g. NN, CNN and RNN) and probability and statistics. As with any successful analytics project, it all starts with the data, and as such you should have experience extracting and transforming data from relational databases (e.g. Oracle, SQL Server). Time-series database experience (e.g. OSI PI) is a plus.


You will be a full-time employee of the TROVE Data Science team. TROVE Data Science positions appeal to self-starters who welcome and excel in team-based, collaborative projects from conception to end-user hand-off, providing an excellent Customer Experience throughout. The ability to distill complex analyses into concepts understandable to management and operations staff is highly desired.


The ideal candidate has a BS and MS or PhD in Computer Science/Math/Stats/Data Science. Work experience should consist of a proven track record of efficiently conducting and coordinating data analytics projects both independently and collaboratively over a 2+ year period (for Junior positions) or a 5+ year period (for Senior positions). Exceptional educational or industry experience can offset any particular requirement. Backgrounds in diverse fields that complement data science (e.g. applied discipline expertise and statistics) are considered an asset. A broad background in the utility/power/energy sector is a plus!


Check out these LinkedIn articles for more insight on what Trove is looking for: https://www.linkedin.com/today/author/kamilgrajski


Check out the latest news on Trove’s acquisition by E Source: https://www.globenewswire.com/news-release/2020/02/12/1984128/0/en/TROVE-acquired-by-E-Source-to-add-predictive-data-science-expertise.html


Key Experience


· M.S. or Ph.D. in a numerical analysis related field of study


· Proficiency in scientific programming in R or Python


· Proficiency and demonstrated experience in all phases of the Data Science Lifecycle: Data Compilation, Data Exploration, Feature Engineering, Modeling, and Communication


· Experience developing reusable R and/or Python packages


· Experience organizing interactive explorations of data into code chunks within R Notebook or Jupyter Notebook


· Experience communicating complex technical concepts to a business audience


· A creative mind, with a keen ability and the initiative to think “beyond”


 


Company Description

TROVE is the data science software company obsessed with making data useful. We deliver high-quality, commercial-grade, and state-of-the-art predictive data science software solutions and services for the electric utility industry. Smart grid, distributed energy renewables like storage and solar, grid electrification through electric vehicles, and many other modern technology trends are upending traditional business operations for utilities. TROVE's solutions help utilities make sense of the massive amounts of data they are collecting to maintain grid resiliency and operational efficiency in the grid of the future.


See full job description

Job Description


Our client is looking for strong Data Scientists who will be primarily responsible for retraining various predictive models using new data sources and techniques, with the goal of outperforming the existing models.


 Responsibilities:


-          Retrain existing predictive models using new data sources and possibly new advanced techniques


-          Work comfortably with multi-terabyte data sets containing billions of rows


-          Perform rigorous model evaluation, design hypothesis tests, oversee test execution and result evaluation


 Required skills/experience:


-          Advanced degree in Machine Learning, Applied Math, Computer Science, Economics, Statistics or a related quantitative field


-          At least 5 years of recent experience building predictive models using SAS EG and SAS EM


-          Proficiency in SAS Enterprise Miner/ Enterprise Guide/ Base SAS is critical 


-          Proficiency in Python and R in addition to SAS is a plus


-          Solid understanding of advanced statistical concepts, especially related to modeling, is a must


-          Experience working with SQL Server, Google Cloud Platform/Google Big Query and Hadoop ecosystem


Preference will be given to candidates with:


-          Practical experience with neural networks, random forests, SVMs, boosting, and other advanced techniques


-          Background in collaborative filtering, data mining, machine learning, optimization or statistical theory




 



See full job description

Job Description


Ateroz has teamed up with a growing global technology firm looking to bring on a Data Scientist to join their product management team. They are seeking data experts who are passionate about using leading edge technology to solve unique problems in the hospitality industry.


The ideal candidate is intellectually curious about how data can be used to tell a story and help drive profitable, revenue-enhancing decisions for the hospitality industry. In addition, the candidate must have strong experience using a variety of data mining/data analysis methods and data tools, building and implementing models, using/creating algorithms, and creating/running simulations, and must be comfortable working with a wide range of stakeholders and functional teams.


Ideal Candidate will have:



  • Master’s in Statistics, Mathematics, Computer Science or another quantitative field

  • 3+ years of experience manipulating data sets and building statistical models in a professional environment

  • Deep knowledge of modern data warehouses, distributed data sets and analysis tools (Snowflake, HDFS, S3, Tableau, DataRama, Jupyter Notebooks, etc.)

  • Deep knowledge of numerical and statistical packages (Python, pandas, NumPy, scikit-learn, R, or related).

  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks

  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications

  • Strong problem-solving skills with an emphasis on developing new analytic products and services

  • Excellent written and verbal communication skills for coordinating across teams


Apply now to join a growing global product team!!


Company Description

Helping Your Business Thrive With Human Capital Solutions

www.ateroz.com


See full job description

Job Description


Have you built a solid career in machine learning and data science, and would you now like to use that brain power to network with and learn from other national representatives, put your ideas together, and brief them to cabinet-level officers?

Utilize your data science skills and represent the United States at NATO in Norfolk, Virginia, for up to three years, with the possibility of extension based on good performance. This position involves working across various sections and departments to close operational gaps, investigating and recommending how AI, machine learning, and algorithms can be developed to enhance decision making.

DOTMLPFI, a division of HUBZone HQ, is a small veteran-owned company that takes care of its own. We will find you work, because we only recruit the best minds in strategic and operational military systems thinking. We are in the business of transforming defense capabilities, and we would like to take you with us if you meet all the qualifications for this high-end advisory position. We are pro-morale: we pay referral bonuses, commissions on projects you want to contribute to, performance bonuses, three weeks of paid time off, and more. Join us at DOT.

If you meet all the qualifications, please submit your application immediately. We are recruiting and interviewing now...

Introduction
NATO established Headquarters Supreme Allied Commander Transformation (HQ SACT) in Norfolk, VA, in 2003 to lead transformation efforts and improve military capabilities to meet 21st century security and defense requirements. This work encompasses the need to enhance the Alliance’s ability to apply a comprehensive approach to the conduct of future operations and engagement with partners, NATO’s interaction with non-NATO entities that include states, non-state actors and international and non-governmental organizations.
Pursuant to the decision by the North Atlantic Council to implement the NATO Command Structure Adaptation, ACT has developed a Strategic Alternatives Branch (SALT). The Branch provides the Commander with alternative ideas derived through the extensive use of data science and analytics that exist outside NATO’s conventional processes in the service of Warfare Development.

Background
SALT operates under the direction of the Deputy Chief of Staff (DCOS) for Strategic Plans & Policy (SPP). It studies complex challenges relevant to the security domain and in areas beyond NATO’s traditional geographic and functional expertise. The Branch uses advanced research to improve rigour and responsiveness of transformational work. It provides scientific analysis to improve the depth and diversity of NATO’s situational awareness and understanding, risk assessment, strategy, and decision making over the mid- to long-term horizon. The Branch coordinates with internal and external stakeholders including NATO Headquarters, Allied Command Operations, the public and private sectors, international and non-governmental organisations, civil society, and academia to ensure that latest concepts, technologies, and processes are applied to relevant work. The Branch is responsible for all outputs pertaining to the Command’s Strategic Alternatives Programme of Work.

Work Tasks



  • Provides technical expertise within the Branch on the application of data science and analytics to strategic alternatives. Leverages various statistical techniques to identify patterns within large data sets to make predictions and enterprise-oriented suggestions. Utilizes predictive modeling to increase and optimize the effects of Warfare Development, supporting enhancement and incorporating machine learning where it adds value across the Command’s sections and departments.

  • Develops strategies to identify patterns within large data sets, makes predictions based on data science, and provides corresponding recommendations to senior leadership. Simplifies and communicates key findings and insights to senior leadership and draws implications for work organization-wide.

  • Makes predictions based on data science. This line of effort consists of working alongside key stakeholders across NATO to develop and execute data-science strategies over the long term. This includes both strategies for data science (e.g., how to scale the use of data science in support of every level of decision-making) and strategies informed by data science (e.g., the use of data science to support the development and translation of policy between NATO HQ and the NATO Command Structure).

  • Develops metrics that improve NATO’s ability to assess effects over the mid- to long-term horizon. Doing so includes the development of comprehensive analytical and mapping capabilities to identify or predict relevant, feasible, and potential challenges and opportunities for NATO and partners.

  • Performs other job related duties as requested.



  • Professional Experience and Education Requirements

    • Master's degree in data science or a related field.

    • Degree in a field related to data science post-2010 such as data science, data analytics, mathematics, software engineering, or programming. Alternatively, a degree in a related field with a minimum of two years of demonstrable activity in a data science community, e.g., Kaggle, may suffice.

    • Demonstrable experience in applying data science bodies of knowledge to solving operational or product related gaps, with special consideration to machine learning and artificial intelligence.

    • Demonstrable experience in the preparation and development of FOGO or civilian-equivalent briefings, background papers, reports, and speeches.

    • Experience on major joint or international military staff that includes warfare development at the strategic level

    • 2 years minimum experience working in defence, corporate, or international establishments in areas of data science since 2010.

    • Experience in working on data science teams, preferably at the start-up stage since 2010

    • Possesses expertise in key fields of data science since 2010: hardware support (less expertise required), programming (strong expertise), visualization (strong expertise), and mathematics (expertise required).

    • Published work in the field of data science within the last five years.

    • Experience in infusing data science-related insights into policy and guidance at the strategic level, whether in the government, private, or non-profit sector

    • Experience in global affairs beyond NATO’s traditional or functional expertise, i.e., beyond Europe or military affairs

    • Experience in alternative analysis or red-teaming efforts

    • Valid NATO SECRET security clearance or national equivalent with NATO eligibility or higher.

    • Demonstrable proficiency in English Language

    • Be able to use contemporary office tools, including MS Office



Company Description

HUBZone HQ is a veteran-owned HUBZone small business that specializes in engineering, management consulting, administrative support, and R&D services across all geographies in support of US government clients including the US Joint Staff, US Navy, US Army, US Air Force, and various US government agencies.


See full job description

Job Description


CVP is seeking a Senior Data Scientist (Clinical Research) to support the modernization of a large clinical research data repository at a national clinical research institute. The research data store is a fully operational system that supports data ingestion, management, access, and compliance across a diverse array of user communities and data types. As the national clinical research institute scales the data store to accommodate more users, to support an increasing data volume, and to deal with new types of data containing personally identifiable information, the program must transition from its current state to a fully-hardened enterprise information system that securely supports deidentified and sensitive data from thousands of mental health research projects and federates with other data repositories worldwide. 


Responsibilities


The Senior Data Scientist is a Client Service Delivery role with CVP working on an anticipated professional services contract at the national clinical research institute. The Senior Data Scientist will work with an interdisciplinary team of CVP professionals (e.g., data scientists, informaticists, system architects, devops engineers, cloud engineers, UI/UX front end developers, agile coaches, and product managers) and government research program leaders and a diverse set of stakeholders. The Senior Data Scientist, as part of the CVP team, will deliver an enhanced user experience for all user communities, ensure the privacy and security of the subjects in the data store, and improve federation into other mental health data repositories.


The clinical research data repository supports the submission, quality assurance, and provision of subject-level research data (clinical, phenotypic, neuroimaging, genomics) from hundreds of NIH-funded and non-NIH-funded research laboratories. The data store directly ingests and manages up-to-date grants administration data from NIH information systems. The Senior Data Scientist will ensure that the team follows industry best standards for stewardship, provenance, versioning, and quality of all data. Data dissemination via the data store’s web application and provisioning to authorized users via web services and data store clients must be accurate and appropriate, in order to adhere to the FAIR principles – data must be findable, accessible, interoperable, and reusable – and in order to preserve subject privacy and the confidentiality of their data.


The Senior Data Scientist will lead a team to:



  • Maintain and update the data store’s enterprise information management strategy for all research and administrative data, to ensure data integrity, availability, and confidentiality.

  • Curate and harmonize all new research data to the data store’s data dictionary, manage data quality assurance processes, support all users’ data submission and harmonization needs, in adherence with the current data harmonization approach

  • Operate and maintain the data dictionary web service for external users and data dictionary web application for internal users

  • Update data harmonization approach as modern methods and tools become available, as appropriate and directed

  • Maintain behavioral, phenotypic, demographic, clinical, genomic, imaging, and neurosignal recording data standards, creating up-to-date data structures for such data sets, as needed

  • Create links to existing controlled vocabularies (such as LOINC or SNOMED) as directed, and use newly emerging formats/infrastructure such as FHIR, as appropriate or as directed

  • Provide Tier 2/3 user support for data submitters who are submitting large-scale biomedical research data.

  • Oversee automated data provision to authorized users through utilization of a centralized permissions model and regular coordination with the Database Administration team

  • Manage and harmonize metadata and supporting documentation, to support auditability and versioning of research data and to provide appropriate context to secondary data users.

  • Manage administrative data to support accurate internal and external reporting

  • Maintain the integration with NIH administrative systems (including NIH’s eRA, IMPACII, and dbGaP data systems)

  • Ensure that the data model maps to NIH and data store business procedures

  • Support a data repository evaluation framework through the management of user logs and internal and external data use and publication metrics

  • Integrate appropriate unique person and data identifiers from external sources (ORCID, etc.), as appropriate and directed

  • Generate data utilization reports, including data submission, data access, data use, data downloads, and return on investment reports for institute leadership

  • Support and extend full auditing capability for all database modifications

  • Maintain up-to-date internal and external-facing documentation of all standard operational and technical procedures


Qualifications



  • Master's Degree

  • At least 8 years of experience

  • Experience in a clinical research setting and using behavioral, phenotypic, demographic, clinical, genomic, imaging, neurological type data

  • Experience in designing and developing sophisticated, modern web applications and services

  • Experience using predictive analytics

  • Familiar with all parts of the typical ML workflow, from structuring of the problem, to wrangling of training data, to evaluation of model performance via review of different performance metrics

  • Strong knowledge of R and/or Python and JavaScript

  • Experience with Tableau, MicroStrategy, Power BI, Plotly, or other dashboarding/analytic tools.

  • Comfortable presenting complex topics to different audiences via oral and written communication.


 Desired Skills



  • Experience with big data tools like Hadoop and Spark

  • AWS, Azure, or Google Cloud

  • Linux

  • Links to work on Kaggle / GitHub highly appreciated

  • Master's Degree in Data Science, Information Systems, Statistics, or equivalent experience.


Company Description

CVP is based on a culture of teamwork, respect, and flexibility. With a collaborative and diverse work environment and a strong team of smart, engaged professionals, we pride ourselves on a culture that not only promotes inclusion and open communication, but also personal and professional growth.

CVP believes that through diversity an organization can truly leverage different viewpoints, expertise, and experience, creating a culture of mutual respect, professionalism, and collaboration. In other words, a better work environment and the best results for our clients. Everyone is part of the team, and everyone is willing to help. Because of this, there is never any shortage of support, and help is only a team member away.

CVP’s idea of Continuous Change also means that CVP team members are continuous learners and forward thinkers. CVP fosters the professional development of our employees.

Customer Value Partners, Inc. is a VEVRAA Federal Contractor and an Affirmative Action and Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, protected veteran status, or disability. Customer Value Partners seeks to provide employment opportunities for protected veterans and individuals with disabilities.


See full job description

Job Description


As a data scientist at Triplebyte, you’ll have the opportunity to work on a variety of challenges to help us scale. You'll be part of a small team that is leveraging data to fix technical hiring. Your day-to-day will include a mix of dataset acquisition, statistical modeling, exploratory data analysis, and software engineering. You’ll report directly to Triplebyte’s Head of Machine Learning and will work alongside a team of 6-8 machine learning engineers and data scientists.

Fields your work will touch on

  • Psychometrics

  • Recommender systems

  • Time series analysis

  • Survival analysis

  • Bayesian inference

  • Probabilistic programming


This is an ideal role for a data scientist who wants the scope and responsibility to own features/products from the inception and research phase through to measuring real-world results.


Company Description

Hiring is hard. It’s something our founders understand well. Before starting Triplebyte, Harj was the first partner at Y Combinator, where young companies often struggle to hire their first employees. Guillaume and Ammon were working on Socialcam (later acquired by Autodesk in 2012 for $60M), where they also experienced firsthand the challenges of hiring.

Triplebyte was founded on the belief that the current technical hiring process doesn’t do enough to help engineers show their strengths. Our founders started Triplebyte to help engineers find great jobs by assessing their abilities without relying on the prestige of their resume credentials.


See full job description

Job Description


 


Data Scientist Sr


Responsibilities Description:


Responsible for the development and implementation of machine learning algorithms and techniques to solve business problems and optimize member experiences. Primary duties may include, but are not limited to: Design machine learning projects to address specific business problems determined by consultation with business partners. Work with data sets of varying size and complexity, including both structured and unstructured data. Pipe and process massive data streams in distributed computing environments such as Hadoop to facilitate analysis. Implement batch and real-time model scoring to drive actions. Develop machine learning algorithms to build customized solutions that go beyond standard industry tools and lead to innovative solutions. Develop sophisticated visualizations of analysis output for business users.


 


Experience Requirements:


BS/MA/MS/PhD in Statistics, Computer Science, Mathematics, Machine Learning, Econometrics, Physics, Biostatistics or related quantitative disciplines. 5 years of experience in predictive analytics and advanced expertise with software such as Python, or any combination of education and experience which would provide an equivalent background. Experience leading end-to-end data science project implementation. Experience in the healthcare sector. Experience in Deep Learning strongly preferred.


 


Required Technical Skill Set:



  1. 5 or more years of Node.js experience, fluent with Mongoose framework

  2. 5 or more years of MongoDB experience.

  3. 5 or more years of distributed computing experience with Hadoop development.

  4. 5 or more years of CRUD SQL operations.

  5. Fluent with full development cycle, continuous integration and continuous deployment, version control tools. 

  6. 5 or more years of Java, Scala or Python programming experience.


Machine learning, deep learning knowledge and experience


 




Company Description

About Ascendum:

We are a global IT services company leveraging technology to solve business problems. Our clients include small, medium, and large firms with small, medium, and large problems. Ascendum's approach is built on the success of using the right combination of strategy, people, processes, technology, and infrastructure for each client situation, to meet specific business needs/challenges and deliver expected results.

High levels of technical expertise, commitment to exceeding customer expectations, and a dedicated and highly motivated global workforce, coupled with a seamless onsite/offshore delivery model and a state-of-the-art worldwide infrastructure, give us the competitive edge to provide comprehensive, end-to-end IT solutions to our diverse clientele.

Headquartered in the heartland of America (Cincinnati, OH) Ascendum has delivery, sales, and support offices worldwide including its offshore development center in Bangalore, and a BPO Center in Ahmedabad, India.

We welcome inquiries from applicants only (We will not accept third parties or agencies for this opportunity). Candidates must be authorized to work permanently in the United States. Due to the volume of resumes that we receive, only those candidates selected for review will be contacted. Ascendum is an equal opportunity employer.


See full job description

Job Description


TDI Technologies, Inc. is seeking candidates for a Data Scientist position. The position’s main responsibility will be to gather, format, and analyze information from Navy Hull, Mechanical, and Electrical (HM&E) systems. The position will require knowledge of machine learning methods and tools, as well as techniques such as natural language processing (NLP) and Optical Character Recognition (OCR). Candidates should be comfortable leading an engineering team, designing new software solutions, and interfacing with customers.




PRINCIPAL DUTIES/RESPONSIBILITIES:



  • Develop and test software, using Python and/or Java, to provide statistical analysis for various shipboard systems.


  • Lead a team in designing software solutions and pipelines that solve our customer’s data analytics challenges.


  • Prototype and demonstrate TDI Technologies’ data science capabilities to prospective customers.



 


EDUCATION AND EXPERIENCE REQUIREMENTS:



  • Bachelor of Science Degree in an engineering discipline - Computer Engineering, Electrical Engineering, Mechanical Engineering, Software Engineering or Computer Science is required


  • Fluent with scripting in Python


  • Strong management skills


  • 2-5 years of experience in a data science role



 


SPECIAL REQUIREMENTS:



  • Successful applicants must either have an active government security clearance or the ability to receive approval upon position acceptance.


  • Must have a valid US passport or the ability to obtain one upon position acceptance.



 


SKILLS AND ABILITIES:


Essential Skills:



  • Software development in Python


  • Experience with machine learning and/or artificial intelligence


  • Experience with at least one of the following problem types: Time-series classification, regression analysis, optimization, clustering


  • Software development and operation within Linux and Unix based systems


  • Experience managing software baselines using version control tools such as SubVersion or Git


  • Strong technical writing skills and attention to detail for documentation


  • Willingness to lead a team and convey technical problems and solutions to a variety of team members



Additional Preferred Skills:



  • Experience with one or more of the following: Natural Language Processing (NLP), Optical Character Recognition (OCR), Selenium, Beautiful Soup


  • Experience with machine learning tools and frameworks, including TensorFlow, scikit-learn, Pandas, and NumPy


  • Experience with rotating machinery, industrial automation and controls, or vibrations analysis.



 


Travel:


This position may require up to approximately 5% travel.

Location:
Philadelphia, PA
 


Equal Employment Opportunity Policy:


TDI Technologies, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type.


This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.




See full job description

Job Description


An established South OC-based SaaS company is looking to grow by adding another Data Scientist. The ideal candidate is looking to continue, or grow, their career in Machine Learning while working with Python and SQL. This role is crucial to the company's next steps as well as the services it provides.


Salary: $120,000 - $140,000


Qualifications:



  • Experience with Python and SQL

  • Experience with Machine Learning - Keras, Tensorflow

  • Experience with Pandas, Numpy, Scikit-Learn



Pluses:



  • R

  • Deep Learning

  • Neural Networks

  • Natural Language Processing

  • H2O

  • Big Data

  • PhD in Data Science or a relevant field





See full job description

Job Description


 


Senior Data Scientist


12+ months


Bellevue, WA


 


Responsibilities:


·         Support ML projects from strategy through implementation and on-going improvements.


·         Perform data collection, analysis, validation, cleansing, and reporting; develop software in support of multiple machine learning workflows; and integrate/deploy code in large-scale production environments.


·         Designs, codes, tests, debugs, and documents ML code - models, ETL processes, SQL queries, and stored procedures.


·         Extracts and analyzes data from various structured and unstructured sources, including databases, files, data lakes and external APIs/websites.


·         Responds to data inquiries from various groups within client’s organization.


·         Requires experience with relational databases, document databases (NOSQL) and knowledge of query tools and/or statistical software.


·         Responsible for other duties/projects as assigned by business management / leadership.


Qualifications:


·         7 plus years of experience in statistical modeling, data mining, analytics techniques, machine learning software development and reporting


·         5 plus years of applied experience in building and deploying Machine Learning solutions using various supervised/unsupervised ML algorithms such as Linear/Logistic Regression, Support Vector Machines, (Deep) Neural Networks, Random Forest, etc., and key parameters that affect their performance.


·         5 plus years of hands-on experience with Python and/or R programming and statistical packages, and ML libraries such as scikit-learn, TensorFlow, PyTorch, etc.


Company Description

We are a technology solutions company helping organizations accelerate their business innovation and growth through project and talent solutions.


See full job description

Job Description

A LinkedIn profile is mandatory.
Job Description:

Preferred Education:
• BS/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or another quantitative field is preferred

Position Summary:
The Cybersecurity Engineer will work for the Global Chief Information Security Organization (CISO) office to identify, test and deploy information security solutions to secure critical data and systems throughout the IBM corporate IT environment. This hands-on role will require Cybersecurity subject matter expertise with demonstrated communication skills for active collaboration with a variety of different technology teams and business units as well as technology partners to promote security engineering practices.
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research. The successful candidate will be a self-starter, have the ability to analyze complex problems, have an insatiable curiosity to learn about new technologies, share knowledge with others, and have experience working in a fast-paced Agile project environment.

Required Skills:
• At least 5 years of proven experience as a Data Scientist or Data Analyst
• At least 5 years of experience developing in languages commonly used for data analysis such as Python, R, or SAS
• At least 3 years of theoretical and practical background in statistical analysis, machine learning, predictive modeling, and/or optimization
• At least 3 years of experience with commercial machine learning systems
• Experience working with databases such as SQL or MongoDB
• Extensive Linux system administration experience
• Experience with containerization technologies, such as Docker
• Experience installing and configuring the Apache web server, including certificate management
• Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
• Strong communication, planning, documentation and organizational skills
• Ability to work independently, set project goals, and achieve milestones with minimal direction
• Ability to work collaboratively, across teams, driving toward common goals, and working within standardized processes
• Excellent communication skills

Essential Job Duties:
• Identify valuable data sources and automate collection processes
• Build predictive models and machine-learning algorithms
• Undertake preprocessing of structured and unstructured data
• Analyze large amounts of information to discover trends and patterns
• Provide break/fix support of all log routing, store and forward functions
• Work with Security Architects and Analysts to accomplish the selection and onboarding of new data sets to designated security tools.
• Perform extraction, transformation, loading, and filtering of data sets.
• Strong communication, planning, documentation and organizational skills
• Ability to work independently, set project goals, and achieve milestones with minimal direction
• Ability to work collaboratively, across teams, driving toward common goals, and working within standardized processes
• Solid experience supporting business critical efforts in a large enterprise environment

Company Description

E-CQR is a San Diego-based consulting company and offers a broad range of business solutions and technology services to help clients achieve their business goals. Our top-notch consultants provide services to the industry's top Pharmaceutical, Biotechnology, Medical device, Clinical research, Diagnostics, Chemical, Bio Medical, Drug development, Bioinformatics and Therapeutics companies.
Our strong company culture is built on the key values of trust, professionalism, commitment, excellence, and respect. These values are an integral part of our culture and form the foundation of long-term client relationships and lasting partnerships.


See full job description

Job Description


You! Yes, we’re looking at you!


We are looking for the kind of person (YOU!) who thrives when building bridges between different functions at the company and enjoys learning new skills for the task at hand.


This Data Scientist role will be a senior technical resource with experience in Data Modeling & Analysis, Large-Scale Data Mining, and customer-facing work. You will be part of a fast-paced, entrepreneurial team that focuses on enabling Big Data and digital/real-time analytical solutions leveraging transformational technologies (R Programming, Python) to deliver cutting-edge solutions.


Principle Accountabilities:


· Help build a foundation for state-of-the-art technical and scientific capabilities to support a number of ongoing and planned data analytics projects. Stay on top of the industry’s achievements and capabilities in order to partner with our clients and provide forward-thinking recommendations.


· Work alongside machine learning experts, technologists, product managers and other informatics analysts to solve complex business problems. Responsible for identifying the data we should be collecting, devising methods of instrumenting systems to extract this information, and working with technologists to develop the process to transform raw data into business insights.


· Investigate data visualization and summarization techniques for conveying key findings related to the applied analytics.



  • Maintain a good communication channel with the Project Managers, Business Analysts and Project Teams.

  • Establish strong working relationship with business analysts to understand requirements.

  • Analyze technical needs, requirements, and state of the client’s infrastructure design, integration, and operations with business analyst team and the client.

  • Follow design models, plans, internal standards, budgets, and processes.

  • Plan for how code will be organized; how the contracts between different parts of the system will look; and monitor the ongoing implementation of the project's methodologies.

  • Prepare detailed documents such as business process flows, data flow diagrams, best practice guidelines describing project phases and tasks, deployment scenarios for the solution in terms of the hardware and software technologies needed, etc.

  • Collaborate with Project Managers to estimate the cost involved in the proposed solution design, compare it with the customer's budget, and develop the overall solution implementation plan.

  • Communicate and present proposed solutions to client with detailed documents prepared.

  • Provide support in implementing application installation, customization, and system integration.

  • Support development teams in reviewing code when they are in doubt

  • Support QA teams to ensure functionalities are satisfactory and in adherence to the design and the client requirements.

  • Provide support for projects in case of issues in development or implementation of solutions.

  • Follow knowledge management practice in capturing design notes, deliverables, methodologies, and solution development activities for traceability and troubleshooting purposes.


 


Required:



  • Advanced degree (Master’s level or higher) in mathematics, statistics, computer science, or other highly quantitative discipline is required

  • Thorough understanding of probability and statistics, econometric time series analysis and similar

  • Expertise in theory and practice of Machine Learning and Natural Language Processing

  • Requires strong programming skills

  • Experience with tools such as R Programming, Python, etc.

  • Knowledge of big data platforms such as Hadoop, Cloudera, MapR, etc.

  • Ability to facilitate discussion, drive consensus, and build partnership

  • Ability to produce high quality work products under pressure and within deadlines.

  • Ability to handle multiple tasks simultaneously and switch between tasks quickly.

  • Ability to construct an approach/plan for the execution of priorities


 


Company Description

PREDICTif is an expert in Big Data and Machine Learning/Artificial Intelligence in the Cloud, serving enterprise customers throughout North America. Since 2007, we’ve performed transformative implementations in Data Science, Information Management and Business Solutions, delivering vision, strategy, execution and value with our proven methodology and industry-certified, highly professional teams.


See full job description

Job Description


 


In a brand-new award, Falconwood will be providing support to the NAVAIR Digital Group, slated to begin mid-August 2020. NAVAIR Digital Group leads the digital transformation of NAVAIR and works to accelerate and scale digital/analytic technologies and capabilities across the NAVAIR Enterprise to increase speed in the delivery and sustainment of warfighting capability. The Digital Group delivers and executes command-wide strategies that align activities and provides the workforce with agile self-service infrastructure, data accessibility, visualization and analytic tools, and digital/IT services to rapidly research, create, deploy, integrate, and maintain applications, enterprise solutions, and other digital capabilities.


 


1 Senior Position and 1 Journeyman Position


Responsibilities



  • Develops analytics methods and processes

  • Identifies appropriate methods and tools to extract knowledge from data

  • Leverages large volumes of data to answer challenges; prepares data for use in predictive and prescriptive modeling (data cleansing)

  • Automates organizational work through scripts for data processing and analytics; develops and applies algorithms and machine learning methods

  • Applies analytic methods and software tools to design and develop analytic programs

  • Performs mathematical modeling

  • Builds predictive models utilizing R

  • Utilizes Python to create mathematical models, statistics, graphs, and databases



Qualifications



  • Active Secret Clearance or higher

  • Associate’s Degree plus 4 years’ additional work experience or demonstrated specialized expertise may be substituted for a Bachelor’s Degree

  • 6 years’ additional applicable work experience or demonstrated specialized expertise may be substituted for a Bachelor’s Degree

  • Bachelor’s Degree plus 4 years additional work experience or demonstrated specialized expertise may be substituted for a Master’s


 


Journeyman: three (3) to ten (10) years of experience performing work related to the labor category functional description and a BA/BS degree in the applicable functional area.


Senior: ten (10) years of related experience and a BA/BS or MA/MS degree in the applicable functional area.


Company Description

Falconwood is a small, woman-owned business providing executive level consultants and programmatic support to Department of Defense Information Technology (IT) initiatives and programs.

We provide expert advice and consultation on a diverse range of IT subjects focusing on acquisition strategy, implementation activities and Information Assurance policy and engineering.

We support the total lifecycle of Information Technology systems and applications.


See full job description

Job Description


 


Data Scientist


Company Description


DataLab USA is an analytics- and technology-driven database marketing consultancy. We combine sophisticated technology, cutting-edge analytics and an intrinsic understanding of marketing to build large-scale addressable marketing programs for Fortune 500 companies. Our clients operate in multiple verticals: Financial Services, Insurance, Telecom, and Travel & Leisure.


 


Key responsibilities/duties include:


·         Build, implement and maintain predictive models through their full lifecycle.


·         Identify patterns & trends in data and provide insights to enhance business decision making.


·         Identify challenges and opportunities from client strategy discussions and take ownership of the solution.


·         Set up and monitor monthly model recalibration process.


·         Generate complex ad-hoc analyses combining disparate concepts or creating new approaches.


·         Design and create new analytic procedures & automation.


·         Review corporate model scoring QC.


·         Review corporate data transformation QC.


 


Education and Experience:


·         MS/PHD in a quantitative discipline or BS in a quantitative discipline with commensurate experience


·         Regular SQL use for querying data


·         Experience programming with at least one language


·         Use of statistical software (e.g., R, Python)


·         Experience with Tableau (or other data visualization platform)


·         Experience with Excel


·         Financial or insurance industry experience strongly preferred


·         3-5 years of Database Marketing and Data Mining experience


 


DataLab USA is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity or national origin.


 



See full job description

Job Description


 THIS IS A CONTRACT OPPORTUNITY WITH THE POTENTIAL TO GO PERM


US CITIZENS / GREEN CARD HOLDERS ONLY


Our client, headquartered in Manhattan, is looking for driven individuals to join a team of passionate data engineers and data scientists in creating their next generation of data products and capabilities. Candidates should possess deep knowledge of machine learning algorithms along with modern data processing technology stacks, and have strong problem-solving skills. We are a small and efficient team building scalable data pipelines on a cloud platform for data analysis and reporting processes and providing solutions with machine learning and data mining techniques.


Responsibilities
Implement machine learning based predictive models such as churn prediction.
Write production worthy code, ensuring it runs efficiently on the target system.
Apply data analytics and statistical techniques to identify relevant ML features.
Overcome common ML problems such as overfitting, cold start, and logging.
Detect flaws in the existing machine learning systems and suggest improvements.
Collaborate with management to align developed models with stakeholders’ needs and business requirements.
Develop, deploy, and maintain the data products and systems with documentations.


Qualifications
Bachelor’s degree or Master’s degree in Computer Science, Data Science, or equivalent practical experience. Master’s degree preferred.
Strong knowledge of machine Learning algorithms.
Strong understanding of programming and computer science fundamentals.
Experience in producing algorithms and improving their efficiency and runtime.
Fluent in Python and SQL.
Experience with cloud platforms is a plus.
Knowledge of Big Data technologies such as Spark or Hadoop is a plus.
Ability to learn modern data technologies.


Company Description

ABOUT DELPHI-US
Delphi-US provides world class talent acquisition solutions to our clients nationwide. We are experts in Information Technology, Engineering, Professional, and select specialized talent recruitment. With a collective 30+ years in our field, Delphi-US brings true best practices to the industry, and a fundamental approach that drives excellence in results for our clients and consultants. Known as 'The Peacemakers in The Talent War', Delphi-US seeks to join the best and brightest of our knowledge workforce to select employers of choice throughout the United States.


See full job description

Job Description


Data Scientist

Our client is a start-up company in the software and oil & gas space. As a Data Scientist, you have demonstrated strong coding skills using Python and have shown an ability to draw tangible conclusions from real-world sensor data. You have strong attention to detail and enjoy a fast-paced work environment where you can take on varied and complex tasks that require independent judgement.


What your day looks like:



  • Work as part of the data science team to analyze data and produce reports for our clients

  • Develop new data-driven insights that can provide value to our clients

  • Develop machine learning models using complex datasets

  • Create and maintain tools using Python to optimize our data analysis


What you bring to the table:



  • MS or PhD in physics or similar quantitative field

  • Understanding of machine learning algorithms and advanced statistics such as: regression, time-series forecasting, clustering, decision trees, modeling, optimization, and unstructured data analysis

  • 4+ years of experience programming in Python

  • 2+ years of hands-on experience in working with NumPy, SciPy, Pandas, TensorFlow (or equivalent)

  • Strong quantitative orientation and attention to detail

  • Ability to:

    • compile, correlate, and compute results from large sensor time series datasets and draw tangible conclusions

    • develop machine learning algorithms and perform advanced statistical analyses

    • work in a timely manner within internal/customer driven deadlines

    • support and work closely with a myriad of colleagues in sales, operations, and product development



  • Must be able to comprehend and communicate accurately, clearly and concisely in English


You may be a fit for this role if:



  • You’re comfortable with investigating open-ended problems and coming up with concrete approaches to solve them

  • You’re a deeply curious person

  • You can wrangle data like a pro alligator wrestler and come out relatively unscathed



See full job description

Job Description


This position will monitor various data science processes that are in production, diagnose and fix dev ops issues as they arise, and work to enhance existing models or processes.


Roles and Responsibilities



  • Monitor production jobs for mission-critical business applications.

  • Monitor performance trends (data science model as well as software compute time and storage use).

  • Quickly address production issues as they arise.

  • Perform ad-hoc analysis on data science model performance, business value, tool adoption, etc.

  • Build enhancements into existing models to incorporate new data streams, change modeling approaches, and generally drive business value.

  • Monitor the flat file interface feed from the model to MyDay.


Required Competencies – tools used



  • SQL

  • Snowflake

  • Informatica Cloud

  • UC4

  • GCP

  • Python

  • Dataproc

  • Airflow

  • Strong dev ops knowledge


Preferred Competencies



  • Databricks

  • Azure/AWS

  • Spark

  • MicroStrategy/Tableau

  • R

  • Algorithm design

  • Data science, operations research, or machine learning experience


Company Description

We are a talent network of data scientists and analytics experts. We help companies build or expand their analytics efforts by finding the right talent to join their teams on a project or full time basis. We combine technology with vast experience in analytics to help our clients move forward. https://chiselanalytics.com


See full job description

Job Description


Remote work location flexibility


 


Description:


Ascendum Solutions is looking for a Data Scientist with a strong NLP/NLU/NLG background to develop and maintain the NLP/NLU engine of our AI infrastructure, supporting several projects and company initiatives.


 


Required Skills:


Candidate must have


·       3+ years of experience with NLP/NLU – developing models/algorithms and improving their results for intent extraction, entity extraction, topic modeling, user-defined entities, multiple languages, sentiment analysis, emotion analysis, etc.


·       2+ years of experience with Python (neural networks, NLTK, spaCy, TensorFlow, PyTorch, BERT, LSTM, etc.) – see the entity-extraction sketch after this list


·       Critical thinking and problem-solving skills


·       Team player, good time-management skills


·       Great interpersonal and communication skills.
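
As referenced above, a minimal, illustrative sketch of entity extraction using spaCy (one of the libraries listed). The model name "en_core_web_sm" and the sample sentence are assumptions for illustration; the model must be downloaded separately (python -m spacy download en_core_web_sm):

import spacy

# Load a small English pipeline (hypothetical choice; any trained pipeline works).
nlp = spacy.load("en_core_web_sm")

doc = nlp("Ascendum Solutions is hiring a Data Scientist in Cincinnati, Ohio.")
for ent in doc.ents:
    # Print each extracted entity and its label, e.g. ORG or GPE.
    print(ent.text, ent.label_)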


 


Additional Preferred Skills:


•         A background in Linguistics or Statistics is preferred


•         SQL


•         Data Analytics


•         Data Engineering


•         Machine learning


•         Data Visualization


•         Data Modeling


Company Description

About Ascendum:

We are a global IT services company leveraging technology to solve business problems. Our clients include small, medium, and large firms with small, medium, and large problems. Ascendum's approach is built on the success of using the right combination of strategy, people, processes, technology, and infrastructure for each client situation, to meet specific business needs/challenges and deliver expected results.

High levels of technical expertise, commitment to exceeding customer expectations, a dedicated and highly motivated global workforce, coupled with a seamless onsite/offshore delivery model and a state-of-the-art worldwide infrastructure, gives us the competitive edge to provide comprehensive, end-to-end IT solutions to our diverse clientele.

Headquartered in the heartland of America (Cincinnati, OH) Ascendum has delivery, sales, and support offices worldwide including its offshore development center in Bangalore, and a BPO Center in Ahmedabad, India.

We welcome inquiries from applicants only (We will not accept third parties or agencies for this opportunity). Candidates must be authorized to work permanently in the United States. Due to the volume of resumes that we receive, only those candidates selected for review will be contacted. Ascendum is an equal opportunity employer.


See full job description

Job Description


 


The ideal candidate will be a mid-level full-stack developer/architect in support of a growing Advanced Analytics effort for IC customers to improve existing data analytic frameworks and innovate new features for a classified data analytics tool suite. Responsibilities will include: improving automation of an existing data ingestion architecture using current technologies; improving data ingestion, enrichment, and correlation algorithms; creating and maintaining web applications using current frameworks; and supporting customers with analytic methods to help improve insight into system effectiveness. The candidate must be strongly familiar with basic application and web-based programming languages, such as Java, Groovy, Python, HTML5, JavaScript, and ReactJS. Experience with Natural Language Processing and Machine Learning is highly preferred.


Required Qualifications


Bachelor's Degree


TS/SCI


U.S. Citizenship


5 Years relevant experience


8570 compliant certification required (Security+CE or higher)


 


- Experience with Natural Language processing and Machine Learning is highly preferred.


- Experience working in a fast-paced collaborative environment.


 


Salient CRGT is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, sexual orientation, gender identity or expression, veteran status, disability, genetic information, or any other factor prohibited by applicable anti-discrimination laws.


Company Description

Salient CRGT is a leading provider of health, data analytics, cloud, agile software development, mobility, cyber security, and infrastructure solutions. Headquartered in Fairfax, VA, Salient CRGT has 22 offices, plus personnel in more than 270 locations across the United States and overseas.
Our 2,400 employees support these core capabilities with full lifecycle IT services and training, helping our customers meet critical goals for pivotal missions. We are purpose-built for IT transformation supporting federal civilian, defense, homeland, and intelligence agencies, as well as Fortune 1000 companies.

Salient CRGT offers excellent benefits, including 15 days PTO, 10 holidays, and tuition reimbursement. Medical, Dental, Vision, and Short and Long-Term disability are also offered.


See full job description

Job Description


 


Responsibilities:


  • Provides expert support to run analyses and analytical experiments in a methodical manner, using both structured and unstructured data.

  • Evaluates alternate models via theoretical approaches.

  • Identifies and exploits data assets, creating new patterns/business value.

  • Creates analyses and models to improve customer and operational Key Process Indicators (KPIs), enhancing measurable value for the Agency and its customers.

  • Builds predictive models, developing advanced algorithms that extract and classify information from large datasets.

  • Analyzes data, developing robust statistical and predictive algorithms/models that drive the product outputs.

  • Identifies meaningful patterns and trends, exposing new business opportunities, and translates results into understandable and actionable output that answers real business questions.

  • Creates and maintains a training curriculum at the “Apprentice” level for Data Science, Artificial Intelligence, and Machine Learning, and facilitates the appropriate training courses for DLA staff.

  • Provides staff-augmentation support to DLA staff for Data Science, Artificial Intelligence, and Machine Learning.

  • Helps the ACE team reach out to other DLA organizations to identify projects suitable for using DS/AI/ML to solve their problems.

 


REQUIRED QUALIFICATIONS



  • Master’s Degree in Computer Science, Statistics, Applied Math or related field.

  • Minimum five (5) years of relevant, practical experience with SAS, R, Python, Keras, TensorFlow, or other ML engineering tools.

  • Must possess a DOD Secret Security Clearance.

  • Relevant certification meeting IAT - Level II.

  • U.S. Citizen


COMPETENCIES



  • Establish Focus

  • Change Management

  • Oral Communication

  • Written Communication

  • Interpersonal Awareness

  • Analytical Thinking

  • Conceptual Thinking

  • Strategic Thinking

  • Initiative

  • Foster Innovation

  • Results Oriented

  • Teamwork

  • Customer Service


Company Description

The cornerstone of HireResources’ success is its commitment to ethical business practices and superb consumer service.
Our "Code of Ethics" is the foundation of this success.
Integrity - Work honestly, every day.
People - Develop and deliver diverse talent.
Customer Focus - Anticipate priorities & exceed their expectations.
Respect - Value all customers and collaborate with one another.
Performance - Be accountable, manage risks and deliver a high level of quality.


See full job description

Job Description


POSITION SUMMARY:


The Data Scientist is a highly motivated and experienced individual who likes to work in a multi-disciplinary team. The Data Scientist will be instrumental in the analysis and interpretation of high-throughput molecular data. Additionally, the individual in this role should have a solid Data Science background and be strongly goal-oriented with a focus on real-world impact.


ESSENTIAL DUTIES AND RESPONSIBILITIES:



  • Statistical and bioinformatics analyses of high-throughput molecular data

  • Interaction with internal and external collaborators to understand, design and develop the requested solutions

  • Development and execution of data analysis protocols to support the company's discovery pipeline

  • Presentation of scientific results internally and externally

  • Preparation and submission of scientific manuscripts for review and publication

  • Other duties as assigned


REQUIREMENTS:



  • Requires a Ph.D. in Bioinformatics, Statistics, Applied Math, Computer Science, or a related field with 0-3 years of experience; or an MS with 5-7 years of industry experience

  • Proficiency in R is required. Experience in Bioconductor is preferred

  • Experience in Linux and Big Data technologies

  • Experience in Scala, C, Python, and MySQL is a plus

  • Experience in working with NGS is a plus

  • Proven ability to find creative and practical solutions to complex problems

  • Proven experience in applying Data Science methodologies to extract, process and transform data from multiple sources

  • Proven ability to deliver outputs in a comprehensive format that highlights major trends, avoids misinterpretations, and supports the conclusions

  • Proven ability to demonstrate attention to detail and record-keeping

  • Quick learner, extremely flexible and adaptable to the needs of internal collaborators in a dynamic environment

  • Ability to work efficiently on multiple projects

  • Excellent communication and interpersonal skills

  • Must be able to work in a team-oriented environment



Company Description

BERG is a Boston-based biopharma company focused on taking a bold “back to biology” approach to therapeutic discovery using its unique AI-based Interrogative Biology® platform. This platform combines patient biology and artificial intelligence-based analytics to engage the differences between healthy and disease environments. The patient’s own biology drives the platform’s results and guides us in the discovery and development of drugs, diagnostics and healthcare applications. Our platform utilizes patient population health data to bring actionable Patient IntelligenceTM to precision medicine applications. This means faster discovery and development of treatments, more effective precision treatments for individuals as well as a reduction in costs to our healthcare systems.


See full job description

Job Description


The Data Scientist


This individual will work with the newly formed Data Science team for the company. That means you’ll be instrumental in making a big impact.


This team is responsible for providing analytical support, performing market analysis, identifying market trends, and telling stories with data.


Share your brilliance!


For this Data Science role, we are especially looking for a few particular traits. Candidates should have a combination of them; having every one is not required and not a deal breaker, but the more the better.

- We are also looking for Big Data Engineers who are well versed with big data tech like Spark, Kafka, Streams, and SQL/NoSQL database modeling.


Responsibilities:



  • Working with Data Science team on both internal and external data sets.

  • Collaborate and work closely with cross-functional business partners to identify gaps and structure problems.

  • Use large data sets of structured and unstructured data to derive Client insights.

  • Work in a fast-paced, dynamic environment to deliver results while meeting deadlines.

  • Promoting process improvement using machine learning techniques.

  • Design data visualizations to communicate complex ideas to various leadership teams.


Qualifications:



  • MS or PhD in Mathematics, Statistics, Computer Science, Engineering or other quantitative disciplines. Complete understanding of and expertise with various machine learning packages.

  • Experience applying machine learning to real-world problems.

  • Experience with feature engineering methods such as PCA, ANOVA, and multivariate analysis (a minimal PCA sketch follows this list). Comprehension of modeling techniques such as classification and regression.

  • Good understanding of various statistical methods, not limited to hypothesis testing, resampling, and Bayesian inference.

  • Fluent in R and Python (including NumPy, Pandas, and scikit-learn).

  • Experience with distributed systems such as Hadoop or Spark; proficient in SQL.

  • Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner.
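
As noted in the qualifications above, here is a minimal, illustrative sketch of PCA used as a feature-engineering step (Python with scikit-learn assumed; the feature matrix is a synthetic stand-in):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: 500 samples, 12 raw features.
X = np.random.randn(500, 12)

# PCA is scale-sensitive, so standardize first.
X_scaled = StandardScaler().fit_transform(X)

# Keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)
print(pca.explained_variance_ratio_)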


 


Company Description

Helping Your Business Thrive With Human Capital Solutions

www.ateroz.com


See full job description

Job Description


Lead Data Scientist / Engineer - Aliso Viejo, CA


AWM Smart Shelf is reinventing retail and looking for smart, talented people to help build on our current and future data and analytics platforms, pipelines, and machine learning models. Our solutions encompass a wide range of data such as:



  • Real-time computer vision and deep learning that enables an understanding of what shoppers and workers are doing (e.g. movement, actions, product selection, etc.)

  • AWM Frictionless™ Shopping (cashierless checkout)

  • Demographic data captured via anonymous facial analysis

  • Ads/video playback and effectiveness

  • Product placement and pricing


The ideal candidate will have the ability to use modern techniques to:



  • Understand and interpret data, reaching meaningful conclusions based on solid premises

  • Predict optimizations both in real time and via big data processing

  • Handle large and rapidly growing data sets


Job Duties:



  • Contributing to the development and maintenance of data pipelines as well as creation of new ones based on AI/computer vision and other data sources

  • Validation of AI models and big data analysis

  • Performing ad-hoc analysis on proprietary datasets for internal and external stakeholders

  • Annotating proprietary datasets and giving direction to data labelers

  • Performing exploratory analysis on data to find trends

  • Training and evaluating various machine learning models


Required Skills:



  • SQL

  • Strong problem-solving / analytical skills

  • Undergraduate or higher degree in data science, statistics, computer science, or other relevant areas

  • Practical big data experience

  • Machine learning

  • Solid understanding of statistics

  • Dataset cleaning

  • Python

  • Git, or other version control framework


Skills Which Are A Plus But Not Required:



  • Azure tools such as Data Factory, Databricks, and Synapse Analytics

  • Understanding of time series/sequence analysis

  • Power BI (preferred), Tableau, or other dashboarding software

  • Understanding of, or experience with, retail data and environments

  • Experience distilling and communicating insights to people who aren’t data scientists in a way that can be readily understood

  • Understanding of agile development processes


What We Offer:



  • Paid vacation and sick time

  • Health benefits

  • A dynamic and engaging environment where you can make an impact using the latest technologies

  • Opportunity for growth

  • 401k plan

  • Potential for employee stock option plan participation


**PLEASE APPLY TO BE CONSIDERED**


Company Description

Please visit www.smartshelf.com to learn more and watch videos of our solutions.


See full job description

Job Description


Title: Senior Data Scientist
Description
The Senior Data Scientist will harness vast amounts of data to optimize business results. They will exercise their knowledge of descriptive and multivariate statistical techniques and applications, and of database analysis tools and techniques, to develop strategic insights that drive business goals. They will work hard at advancing the state of the art in how data is applied to solve problems in the public transportation industry, and will also be responsible for teaching data analysts and software engineers how to work effectively with vast quantities of data.
Responsibilities


Strategy & Planning
Work with cross departmental team to define metrics, guidelines, and strategies for effective use of algorithms and data.
Identify, design, and build appropriate datasets for the identification of complex data patterns and analytics.
Create data mining and analytics architectures, coding standards, statistical reporting, and data analysis methodologies.
Establish links across existing data sources and find new, interesting mash-ups.
Coordinate data resource requirements between analytics, Application teams and Business Stakeholders.
Work with product managers, engineers, and analytics team members to translate prototypes into production
Assist in the development of data management policies and procedures.
Develop best practices for analytics instrumentation and experimentation.


Acquisition & Deployment
Conduct research and make recommendations on big data infrastructure, database technologies, analytics tools, services, protocols, and standards in support of procurement and development efforts.
Well versed in Cloud technologies and Big Data platform.
Drive the collection of new data and the refinement of existing data sources.


Operational Management
Develop algorithms and predictive models to solve critical business problems.
Analyze historical data, identify patterns, and come up with predictive, prescriptive, descriptive, and cognitive analytics.
Develop tools and libraries that will help analytics team members more efficiently interface with huge amounts of data.
Will deep dive, as requirements demand, into supervised machine learning (classification with a binary target variable). Expert in handling a suite of non-parametric classification algorithms (decision trees, rule-based classifiers & Naive Bayes) as well as parametric classification algorithms (logistic regression, support vector machines, nearest-neighbor classifiers), with deep use of R and Python; a minimal classifier-comparison sketch follows this list. Expert in unsupervised learning techniques. Knows the optimal method/technique to use in different situations.
Analyze large, noisy datasets and identify meaningful patterns that provide actionable results.
Develop and automate new enhanced imputation algorithms.
Create informative visualizations that intuitively display large amounts of data and/or complex relationships.
Provide and apply quality assurance best practices for data science services across the organization.
Develop, implement, and maintain change control and testing processes for modifications to algorithms and data analytics.
Collaborate with database and disaster recovery administrators to ensure effective protection and integrity of data assets.
Manage and/or provide guidance to junior members of the analytics team.
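
As referenced above, a minimal, illustrative sketch (in Python with scikit-learn, which this posting does not mandate) comparing the named parametric and non-parametric classifiers on a synthetic binary target:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real dataset with a binary target variable.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(),
    "nearest neighbors": KNeighborsClassifier(n_neighbors=15),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
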
Position Requirements


Formal Education & Certification
Minimum 5 years' experience in modeling and analysis. Graduate or postgraduate university degree in computer science, mathematics, statistics, or data science, and/or 10+ years of equivalent work experience.
The CAP (Certified Analytics Professional) certification is highly desirable.


Knowledge & Experience
Well versed with CRISP-DM methodology and reference model.
Very hands-on, with a proven track record of successful data science implementations in industry, preferably the public transportation and travel industries.
Responsible for building advanced analytics and models for critical business areas including schedule optimization, anomaly detection of the routes, driver utilizations, GPS data and engine fault codes.
Perform advanced quantitative and statistical analysis of large datasets to identify trends, patterns, and correlations that can be used to improve business performance.
Develop scalable, efficient, and automated processes for large scale data analyses and model development, validation, and implementation.
Utilization of machine learning and other techniques to prevent, detect/resolve route data and operational data anomalies requiring human intervention.
Core and strategic responsibility to assess, plan and develop driver scoring algorithms and model.
Collaborates with internal and external business and data teams, data engineers and analysts.
Oversee processes within the Data and Analytics team to ensure sustainable and efficient deployment of algorithm trained models.
Advanced knowledge of one or more of the following: Linear and Non-Linear Models, Time Series Analysis, Random Forest, SVM, Neural Networks, Unsupervised Methods (Dimensionality Reduction, Clustering, etc.)
Probabilistic Modeling and Computation.
Machine Learning, including Deep Learning.
Advanced knowledge in R and Python.
Advanced SQL skills.
Must be able to perform impact data analysis and data profiling on source data systems and determine the best way to model data.
Employ data visualization tools where appropriate (ggplot, Tableau, ThinkCell, etc.).
Clearly communicate project requirements and modeling outcomes to technical and non-technical audiences.
Ability to work in a self-directed work environment.
Ability to work in a team environment.
Extensive experience solving analytical problems using quantitative approaches.
Comfort manipulating and analyzing complex, high-volume, high-dimensionality data from varying sources
Well versed with Git.
Expert in using NLP and ensemble methods.
Perform exploratory data analysis using SQL.
Must have machine learning and AI expertise.
Familiarity with relational, SQL and NoSQL databases.
Expert knowledge of statistical analysis tools such as R, RStudio, MATLAB, and RapidMiner.
Good working knowledge of reporting and analytical tools such as Tableau and Power BI.
Experience with very large datasets a must.
Knowledge of map/reduce frameworks (Hive, Pig, and other tools for accessing data in Hadoop/HBase cluster systems) is a plus.
Experience programming in Java, Perl, or C.
Project management experience.
Good understanding of the organization's goals and objectives.
Knowledge of applicable data privacy practices and laws.


Personal Attributes
A strong passion for empirical research and for answering hard questions with data.
A flexible analytic approach that allows for results at varying levels of precision.
Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner.
Good written and oral communication skills.
Strong technical documentation skills.
Good interpersonal skills.
Highly self-motivated and directed.
Keen attention to detail.
Ability to effectively prioritize and execute tasks in a high-pressure environment.
Strong customer service orientation.
Experience working in a team-oriented, collaborative environment.


Company Description

Founded in 1990, Sunrise Systems is an award winning IT/Professional Staffing firm to Fortune 500 and State/Local Government Agencies.


See full job description

Job Description


 


Are you an experienced Data Scientist? If so, let’s talk!


Our client is seeking a Data Scientist to work at their location in Richmond, VA! This is a 12-month contract opportunity.


This role will provide technical consulting, working as a Data Scientist on a Big Data Platform. The Data Scientist will provide advanced analytics solutions to Business users using the tools specified within the Qualifications section.


Requirements:



  • At least 5 years of experience in data science using R, Python, etc. on a Hadoop platform

  • Strong skills in statistical application prototyping with expert knowledge in R and/or Python development

  • Design machine learning projects to address business problems determined by consultation with business partners.

  • Work on a variety of datasets, including both structured and unstructured data.

  • Deep knowledge of machine learning, data mining, statistical predictive modeling, and extensive experience applying these methods to real world problems

  • Extensive experience in Predictive Modeling and Machine Learning: Classification, Regression & Clustering

  • Understanding of and experience working with Big Data ecosystems is preferred: Hadoop, HDFS, Hive, Sqoop, Spark (PySpark, SparkR, SparkSQL), and Jupyter & Zeppelin notebooks – see the illustrative sketch after this list

  • Understanding and/or Experience with data engineering is a plus

  • Experience with cloud technologies (AWS, Azure, GCP) is a big plus
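
As referenced above, a minimal, illustrative PySpark sketch of reading data on a Hadoop/Spark platform and running a simple aggregation; the file path and column name are hypothetical, and on a real cluster the source could just as well be HDFS or Hive:

from pyspark.sql import SparkSession

# Start (or reuse) a Spark session.
spark = SparkSession.builder.appName("example-aggregation").getOrCreate()

# Hypothetical CSV input.
df = spark.read.csv("data/transactions.csv", header=True, inferSchema=True)

# Simple exploratory aggregation on a hypothetical "category" column.
df.groupBy("category").count().orderBy("count", ascending=False).show()

spark.stop()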


Job Requisition # 31121


APC believes that the workplace should be fun and enjoyable. Join our team today and ignite your career!


Meet APC


Company – Staffing – 501 – 1000 employees


APC is a professional services organization focused on engaging people and positively impacting lives. We offer excellent benefits and the opportunity for longevity, working with our premier fortune 100 clients. As professionals serving professionals, we take pride in providing our employees with the highest level of customer service and support, creating meaningful, fulfilling and rewarding experiences every day.


APC is committed to creating a diverse work environment and is proud to be an equal opportunity employer. All qualified individuals will receive consideration for employment without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, disability, genetics, or veteran status.


Company Description

Meet APC

Alliance of Professionals & Consultants, Inc. (APC), is an award-winning, ISO 9001:2015 certified business in operation since 1993. Its focus is finding & placing top IT, engineering, energy, and other highly skilled talent. Additionally, APC offers a full suite of contract labor-related business solutions for mid- to large-sized companies. Headquartered in Raleigh, NC, the Native American-owned company has satellite offices throughout the US, with Professionals currently engaged on assignments in 38 US states and six countries abroad.

APC is committed to creating a diverse work environment and is proud to be an equal opportunity employer. All qualified individuals will receive consideration for employment without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, disability, genetics, or veteran status.


See full job description

Job Description


The Company is seeking a highly skilled and experienced Biomedical Data Scientist. This position is focused on managing the various scientific collaborations between the Company and our biopharma partners. In addition, this position has responsibility for overseeing other scientists working on the various projects. This position requires you to work closely with physicians, researchers, and biopharma scientists to drive forward cutting-edge research in the development of novel insights and new models of patient care.


You must be a self-starter who thrives in a fast-paced, agile environment – which requires wearing many hats, being able to change direction quickly, and showing an eagerness to learn new technologies as needed.


This position will be based in our Rochester, MN office.  The position requires a minimum of 3-5 years post degree experience.


 Responsibilities:


-        Work collaboratively with physicians and researchers to develop novel insights and new models of patient care within various diseases, leveraging our data platform and new AI/ML models.


-        Aid in the management of other data science staff and the overall scientific project portfolio with our healthcare partners in Rochester, MN and biopharma partners.


-        Draft and review collaborative project proposals that include concise project aims, deliverables and timelines.


-        Actively participate in presenting or co-presenting collaborative work at conferences and authoring articles within peer reviewed journals.


-        Create case studies showcasing our data platform’s capabilities to collaborators and clients.


-        Contribute to client-facing applications for our knowledge synthesis platform


-        Process public and proprietary datasets for integration into our knowledge synthesis platform


Requirements:


-        Master’s degree or PhD in biological sciences, biomedical engineering, bioinformatics or mathematics


-        3-5 years post degree experience


-        Excellent understanding of disease (preferably with a background in cancer) and the relevant physiological and biological underpinnings of diseases.


-        Experience working within the healthcare industry


-        Strong understanding of various types of healthcare information and data such as laboratory results, medications, clinical notes, images (MRI, X-ray, etc.).


-        Proven track record of managing others in their day to day activities along with overseeing overall project goals.


-        Strong ability to quickly learn and adapt to new domains and discoveries within healthcare.


-        Experience in running and developing new ML/AI models using Python and/or R


-        Superior verbal, written, and interpersonal communication skills, particularly with physicians and experienced principal investigators.


-        Track record of successfully managing multiple, concurrent projects


-        Excellent decision making, human relations, time-management and organizational skills


-        Ability to prioritize and meet deadlines in a fast-paced environment


The Company:


Located in Kendall Square in Cambridge, MA and Rochester, MN, the Company is a rapidly-growing technology organization focused on marketing our software solutions and technology platforms to the world’s foremost life science and biotechnology companies, academic institutions and hospitals to assist them in solving their most significant questions and challenges in order to serve patients.


The Company is an equal opportunity employer.


Benefit package includes competitive sick/vacation/holiday package, equity, health, dental, life insurance, short and long-term disability, and 401k plan.


 


 



See full job description

Job Description


The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers. 


 


Responsibilities



  • Analyze raw data: assess quality, cleanse, and structure it for downstream processing (a minimal illustrative sketch follows this list)

  • Design accurate and scalable prediction algorithms

  • Collaborate with engineering team to bring analytical prototypes to production

  • Generate actionable insights for business improvements
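
As referenced above, a minimal, illustrative sketch of assessing quality, cleansing, and structuring raw data with pandas; the file name and column names are purely hypothetical:

import pandas as pd

# Hypothetical raw input file.
df = pd.read_csv("raw_data.csv")

# Assess quality: share of missing values per column.
print(df.isna().mean())

# Cleanse: drop duplicates, require a key column, coerce a numeric column.
df = df.drop_duplicates()
df = df.dropna(subset=["record_id"])                         # hypothetical key column
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # hypothetical numeric column

# Structure for downstream processing (columnar format; requires pyarrow or fastparquet).
df.to_parquet("clean_data.parquet")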


 


Qualifications


 



  • Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.)

  • At least 1-2 years of experience in quantitative analytics or data modeling

  • Deep understanding of predictive modeling, machine-learning, clustering and classification techniques, and algorithms

  • Fluency in a programming language (Python, C, C++, Java, SQL)

  • Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)


Company Description

Headfarmer is a boutique recruiting firm specializing in the permanent and contract placement of the upper echelon of talent in the greater Phoenix area. We offer a unique process of "headfarming" which provides a level of professional support to both candidates and clients that exceeds recruiting industry standards.


See full job description