Senior Software Engineer

at GroundTruth

Posted by Priti Singh
Remote only
7 - 12 yrs
₹15L - ₹32L / yr
Full time
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Amazon Web Services (AWS)
Python
Data Structures

You will:

  • Create highly scalable AWS microservices using cutting-edge cloud technologies.
  • Design and develop Big Data pipelines that handle huge volumes of geospatial data.
  • Bring clarity to large, complex technical challenges.
  • Collaborate with Engineering leadership to help drive technical strategy.
  • Scope, plan, and estimate projects.
  • Mentor and coach team members at different levels of experience.
  • Participate in peer code reviews and technical meetings.
  • Cultivate a culture of engineering excellence.
  • Seek out, implement, and adhere to industry standards, frameworks, and best practices.
  • Participate in the on-call rotation.

You have:

  • Bachelor’s/Master’s degree in computer science, computer engineering or relevant field.
  • 5+ years of experience in software design, architecture and development.
  • 5+ years of experience using object-oriented languages (Java, Python).
  • Strong experience with Big Data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
  • Strong experience in working with different AWS technologies.
  • Excellent competencies in data structures & algorithms.
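The geospatial and data-structures requirements above go hand in hand. As a toy illustration (not GroundTruth's actual method; the function name and bit depth are made up), nearby points can be bucketed by interleaving latitude/longitude bits, a simplified geohash:

```python
# Toy spatial index: interleave latitude/longitude bits into one key so
# that nearby points share key prefixes (a simplified geohash).
# Illustrative only; production systems use libraries such as H3 or a
# full geohash implementation.
def spatial_key(lat, lon, bits=16):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    key = 0
    for i in range(bits):
        # Alternate between longitude and latitude bits.
        if i % 2 == 0:
            mid = (lon_lo + lon_hi) / 2
            bit = lon >= mid
            lon_lo, lon_hi = (mid, lon_hi) if bit else (lon_lo, mid)
        else:
            mid = (lat_lo + lat_hi) / 2
            bit = lat >= mid
            lat_lo, lat_hi = (mid, lat_hi) if bit else (lat_lo, mid)
        key = (key << 1) | int(bit)
    return key

nyc = spatial_key(40.7128, -74.0060)
nyc_nearby = spatial_key(40.7306, -73.9352)
tokyo = spatial_key(35.6762, 139.6503)
print(bin(nyc), bin(nyc_nearby), bin(tokyo))
```

Because nearby points share high-order bits (at 16 bits, the two New York points collide while Tokyo does not), a plain sorted index or prefix lookup can answer proximity queries.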

Nice to have:

  • Proven track record of delivering large-scale projects, and an ability to break down large tasks into smaller deliverable chunks.
  • Experience in developing high-throughput, low-latency backend services.
  • Affinity for spatial data structures and algorithms.
  • Familiarity with Postgres DB, Google Places, or Mapbox APIs.

What we offer

At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.

  • Unlimited Paid Time Off
  • In Office Daily Catered Lunch
  • Fully stocked snacks/beverages
  • 401(k) employer match
  • Health coverage including medical, dental, vision and option for HSA or FSA
  • Generous parental leave
  • Company-wide DEIB Committee
  • Inclusion Academy Seminars
  • Wellness/Gym Reimbursement
  • Pet Expense Reimbursement
  • Company-wide Volunteer Day
  • Education reimbursement program
  • Cell phone reimbursement
  • Equity Analysis to ensure fair pay

About GroundTruth

Founded: 2009
Stage: Profitable
GroundTruth is the leading location-based marketing and advertising technology company. Brands, agencies, small businesses, and non-profits trust our performance-driven solutions to help them reach consumers during moments of intent that generate important business outcomes. GroundTruth’s suite of geo-contextual omnichannel products and services are available at scale through our self-serve advertising platform, managed services, and industry reseller partnerships. GroundTruth’s marketing platform is powered by a unique data set called "visitation data" accredited by the Media Rating Council (MRC). Our proprietary cleansing processes combine contextual mapping technology (Blueprints™), owned and operated properties, and third-party mobile location data, together yielding over 30 billion visits annually.
Connect with the team: Priti Singh, Sumit Paul

Similar jobs

Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

Big Data Engineer + Spark responsibilities:

  • At least 3 to 4 years of relevant experience as a Big Data Engineer.
  • Minimum 1 year of relevant hands-on experience with the Spark framework.
  • Minimum 4 years of application development experience using a programming language such as Scala, Java, or Python.
  • Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, or Impala.
  • Strong programming experience building applications/platforms using Scala, Java, or Python.
  • Experienced in implementing Spark RDD transformations and actions to implement business analysis.
  • An efficient interpersonal communicator with sound analytical, problem-solving, and management capabilities.
  • Strives to keep the slope of the learning curve high; able to quickly adapt to new environments and technologies.
  • Good knowledge of agile software development methodology.
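The RDD transformation/action distinction mentioned above can be sketched without Spark at all: transformations are lazy, actions force evaluation. A pure-Python analogy (illustrative only; these are not the actual Spark APIs):

```python
from functools import reduce

# Pure-Python analogy for Spark's lazy transformations vs. eager actions.
def rdd_map(func, data):
    # "Transformation": returns a lazy generator; nothing runs yet.
    return (func(x) for x in data)

def rdd_filter(pred, data):
    # "Transformation": also lazy.
    return (x for x in data if pred(x))

def rdd_reduce(func, data):
    # "Action": forces evaluation of the whole lazy chain.
    return reduce(func, data)

records = [3, 1, 4, 1, 5, 9, 2, 6]
squared = rdd_map(lambda x: x * x, records)        # lazy
evens = rdd_filter(lambda x: x % 2 == 0, squared)  # still lazy
total = rdd_reduce(lambda a, b: a + b, evens)      # triggers computation
print(total)  # 16 + 4 + 36 = 56
```

In real Spark, `map` and `filter` build a lineage graph in the same lazy way, and an action such as `reduce` or `collect` triggers the distributed computation.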
Read more
at CoffeeBeans Consulting
Posted by Nelson Xavier
Bengaluru (Bangalore), Pune, Hyderabad
4 - 8 yrs
₹10L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more

Job responsibilities

- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges

- You will pair-program to write clean, iterative code based on TDD

- Leverage various continuous delivery practices to deploy, support and operate data pipelines

- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

- Create data models and speak to the tradeoffs of different modeling approaches

- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

- Encourage open communication and advocate for shared outcomes
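The TDD practice above can be sketched with a minimal example: tests pin down a small pipeline step's behaviour before or alongside the implementation (the function and its name are hypothetical):

```python
import unittest

# TDD-style sketch: the tests below specify the behaviour of a tiny,
# hypothetical pipeline step before it goes into a larger data flow.
def dedupe_and_sort(records):
    """Remove duplicate records and return them sorted."""
    return sorted(set(records))

class TestDedupeAndSort(unittest.TestCase):
    def test_removes_duplicates_and_sorts(self):
        self.assertEqual(dedupe_and_sort([3, 1, 2, 3, 1]), [1, 2, 3])

    def test_empty_input(self):
        self.assertEqual(dedupe_and_sort([]), [])

# Run the tests programmatically so the sketch works outside a test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDedupeAndSort)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Writing the assertions first keeps each pipeline step small, pure, and independently testable.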

 

Technical skills

- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Spark (Scala) and Hadoop

- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

- Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based (AWS EMR, Azure HDInsights, Qubole, etc.) Hadoop distributions

- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems

- You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

 



Professional skills

- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

- An interest in coaching, sharing your experience and knowledge with teammates

- You enjoy influencing others and always advocate for technical excellence while being open to change when needed

- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more

Read more
at Phenom People
Posted by Srivatsav Chilukoori
Hyderabad
3 - 6 yrs
₹10L - ₹18L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Deep Learning
+4 more

JOB TITLE - Product Development Engineer - Machine Learning
● Work Location: Hyderabad
● Full-time
 

Company Description

Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
 

Job Responsibilities:

  • Design and implement machine learning, information extraction, probabilistic matching algorithms and models
  • Research and develop innovative, scalable and dynamic solutions to hard problems
  • Work closely with Machine Learning Scientists (PhDs), ML engineers, data scientists and data engineers to address challenges head-on.
  • Use the latest advances in NLP, data science and machine learning to enhance our products and create new experiences
  • Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
  • Be a valued contributor in shaping the future of our products and services
  • You will be part of our Data Science & Algorithms team and collaborate with product management and other team members
  • Be part of a fast-paced, fun-focused, agile team

Job Requirement:

  • 4+ years of industry experience
  • Ph.D./MS/B.Tech in computer science, information systems, or similar technical field
  • Strong mathematics, statistics, and data analytics skills
  • Solid coding and engineering skills, preferably applied to machine learning (not mandatory)
  • Proficient in Java, Python, and Scala
  • Industry experience building and productionizing end-to-end systems
  • Knowledge of Information Extraction, NLP algorithms coupled with Deep Learning
  • Experience with data processing and storage frameworks like Hadoop, Spark, Kafka etc.
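As a toy illustration of the information-extraction work named above, here is a minimal regex-based extractor (hypothetical pattern and names; real systems use trained NLP models rather than hand-written rules):

```python
import re

# Toy information extraction: pull (skill, years) pairs out of free text
# with a hand-written pattern. Illustrative only.
PATTERN = re.compile(
    r"(\d+)\+?\s*years?\s+(?:of\s+)?experience\s+(?:in|with|using)\s+([A-Za-z]+)"
)

def extract_experience(text):
    return [(skill, int(years)) for years, skill in PATTERN.findall(text)]

sample = ("5 years experience with Python and "
          "2+ years of experience in Spark")
print(extract_experience(sample))  # [('Python', 5), ('Spark', 2)]
```

Rule-based extraction like this breaks quickly on varied phrasing, which is exactly why the role calls for NLP and deep-learning approaches.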


Position Summary

We’re looking for a Machine Learning Engineer to join our team at Phenom. We expect candidates to fulfill the points below for this role.

  • Building accurate machine learning models is the main goal of a machine learning engineer
  • Linear algebra, applied statistics, and probability
  • Building data models
  • Strong knowledge of NLP
  • Good understanding of multithreaded and object-oriented software development
  • Mathematics, Mathematics and Mathematics
  • Collaborate with data engineers to prepare the data models required for machine learning models
  • Collaborate with other product team members to apply state-of-the-art AI methods, including dialogue systems, natural language processing, information retrieval, and recommendation systems
  • Build large-scale software systems and work on numerical computation topics
  • Use predictive analytics and data mining to solve complex problems and drive business decisions
  • Should be able to design an accurate end-to-end ML architecture, including data flows, algorithm scalability, and applicability
  • Tackle situations where both the problem and the solution are unknown
  • Solve analytical problems, and effectively communicate methodologies and results to customers
  • Adept at translating business needs into technical requirements and translating data into actionable insights
  • Work closely with internal stakeholders such as business teams, product managers, engineering teams, and customer success teams.

Benefits

  • Competitive salary for a startup
  • Gain experience rapidly
  • Work directly with the executive team
  • Fast-paced work environment

 

About Phenom People

At PhenomPeople, we believe candidates (Job seekers) are consumers. That’s why we’re bringing e-commerce experience to the job search, with a view to convert candidates into applicants. The Intelligent Career Site™ platform delivers the most relevant and personalized job search yet, with a career site optimized for mobile and desktop interfaces designed to integrate with any ATS, tailored content selection like Glassdoor reviews, YouTube videos and LinkedIn connections based on candidate search habits and an integrated real-time recruiting analytics dashboard.

 

 Use Company career sites to reach candidates and encourage them to convert. The Intelligent Career Site™ offers a single platform to serve candidates a modern e-commerce experience from anywhere on the globe and on any device.

 We track every visitor that comes to the Company career site. Through fingerprinting technology, candidates are tracked from the first visit and served jobs and content based on their location, click-stream, behavior on site, browser and device to give each visitor the most relevant experience.

 Like consumers, candidates research companies and read reviews before they apply for a job. Through our understanding of the candidate journey, we are able to personalize their experience and deliver relevant content from sources such as corporate career sites, Glassdoor, YouTube and LinkedIn.

 We give you clear visibility into the Company's candidate pipeline. By tracking up to 450 data points, we build profiles for every career site visitor based on their site visit behavior, social footprint and any other relevant data available on the open web.

 Gain a better understanding of Company’s recruiting spending and where candidates convert or drop off from Company’s career site. The real-time analytics dashboard offers companies actionable insights on optimizing source spending and the candidate experience.

 

Kindly explore the company, Phenom ( https://www.phenom.com/ )
YouTube -  https://www.youtube.com/c/PhenomPeople
LinkedIn -  https://www.linkedin.com/company/phenompeople/

Phenom | Talent Experience Management

Read more
Posted by Stephen FitzGerald
Remote only
2 - 8 yrs
₹10L - ₹25L / yr
SQL server
PowerBI
Spotfire
Qlikview
Tableau
+11 more

Senior Product Analyst

Pampers Start Up Team

India / Remote Working

 

 

Team Description

Our internal team focuses on app development, with data a growing area within the structure. We have a clear vision and strategy spanning app development, data, testing, solutions, and operations. The data team sits across the UK and India, while other teams sit across Dubai, Lebanon, Karachi, and various cities in India.

 

Role Description

In this role you will use a range of tools and technologies, working primarily on data design, data governance, reporting, and analytics for the Pampers App.

 

This is a unique opportunity for an ambitious candidate to join a growing business where they will get exposure to a diverse set of assignments, can contribute fully to the growth of the business and where there are no limits to career progression and reward.

 

Responsibilities

● To be the Data Steward and drive governance having full understanding of all the data that flows through the Apps to all systems

● Work with the campaign team to apply data fixes when issues arise with campaigns

● Investigate and troubleshoot issues with product and campaigns giving clear RCA and impact analysis

● Document data, create data dictionaries, and be the “go to” person for understanding how data flows

● Build dashboards and reports using Amplitude, Power BI and present to the key stakeholders

● Carry out ad-hoc data investigations into issues with the app, querying data in BigQuery/SQL/CosmosDB, and present findings back

● Translate analytics into a clear PowerPoint deck with actionable insights

● Write up clear documentation on processes

● Innovate with new processes or ways of providing analytics and reporting

● Help the data lead to find new ways of adding value

 

 

Requirements

● Bachelor’s degree and a minimum of 4+ years’ experience in an analytical role preferably working in product analytics with consumer app data

● Strong SQL Server and Power BI skills required

● You have experience with most or all of these tools – SQL Server, Python, Power BI, BigQuery.

● Understanding of mobile app data (Events, CTAs, Screen Views etc)

● Knowledge of data architecture and ETL

● Experience in analyzing customer behavior and providing insightful recommendations

● Self-starter, with a keen interest in technology and highly motivated towards success

● Must be proactive and prepared to speak in meetings

● Must show initiative and desire to learn business subjects

● Able to work independently and provide updates to management

● Strong analytical and problem-solving capabilities with meticulous attention to detail

● Excellent problem-solving skills; proven teamwork and communication skills

● Experience working in a fast paced “start-up like” environment
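The ad-hoc investigation work described above typically boils down to event-level SQL. A minimal sketch using Python's built-in sqlite3 as a stand-in for BigQuery (the table and column names are hypothetical):

```python
import sqlite3

# Hypothetical app-event table: in practice this analysis would run in
# BigQuery, but sqlite3 (standard library) lets us sketch its shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event TEXT, screen TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("u1", "screen_view", "home"),
     ("u1", "cta_tap", "home"),
     ("u2", "screen_view", "home"),
     ("u2", "screen_view", "rewards")],
)

# CTA click-through rate per screen: taps divided by views.
rows = conn.execute("""
    SELECT screen,
           SUM(CASE WHEN event = 'cta_tap' THEN 1 ELSE 0 END) * 1.0 /
           SUM(CASE WHEN event = 'screen_view' THEN 1 ELSE 0 END) AS ctr
    FROM events
    GROUP BY screen
    ORDER BY screen
""").fetchall()
print(rows)  # [('home', 0.5), ('rewards', 0.0)]
```

The same aggregation translates directly to BigQuery SQL; only the connection layer changes.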

 

Desirable

  • Knowledge of mobile analytical tools (Segment, Amplitude, Adjust, Braze and Google Analytics)
  • Knowledge of loyalty data
Read more
at Statusneo
Posted by Alex P
Hyderabad, Bengaluru (Bangalore), Gurugram
3 - 7 yrs
₹2L - ₹20L / yr
Big Data
Data engineering
Scala
Apache Hive
Hadoop
+3 more

Big Data JD:

 

Data Engineer – SQL, RDBMS, pySpark/Scala, Python, Hive, Hadoop, Unix

 

Data engineering services required:

  • Build data products and processes alongside the core engineering and technology team
  • Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models
  • Integrate data from a variety of sources, assuring that it adheres to data quality and accessibility standards
  • Modify and improve data engineering processes to handle ever larger, more complex, and more varied data sources and pipelines
  • Use Hadoop architecture and HDFS commands to design and optimize data queries at scale
  • Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases

Big data engineering skills:

  • Demonstrated ability to perform the engineering necessary to acquire, ingest, cleanse, integrate, and structure massive volumes of data from multiple sources and systems into enterprise analytics platforms
  • Proven ability to design and optimize queries to build scalable, modular, efficient data pipelines
  • Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
  • Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit-testing, deployment, support, and maintenance
  • Ability to operate with a variety of data engineering tools and technologies; vendor-agnostic candidates preferred

Domain and industry knowledge:

  • Strong collaboration and communication skills to work within and across technology teams and business units
  • Demonstrates the curiosity, interpersonal abilities, and organizational skills necessary to serve as a consulting partner, including the ability to uncover, understand, and assess the needs of various business stakeholders
  • Experience with problem discovery, solution design, and insight delivery that involves frequent interaction, education, engagement, and evangelism with senior executives
  • The ideal candidate will have extensive experience with the creation and delivery of advanced analytics solutions for healthcare payers or insurance companies, including anomaly detection, provider optimization, studies of sources of fraud, waste, and abuse, and analysis of clinical and economic outcomes of treatment and wellness programs involving medical or pharmacy claims data, electronic medical record data, or other health data
  • Experience with healthcare providers, pharma, or life sciences is a plus

 

Read more
Bengaluru (Bangalore)
4 - 7 yrs
₹25L - ₹28L / yr
Data Science
Data Scientist
R Programming
Python
SQL
  • Banking Domain
  • Assist the team in building Machine learning/AI/Analytics models on open-source stack using Python and the Azure cloud stack.
  • Be part of the internal data science team at Fragma Data, which provides data science consultation to large organizations such as banks, e-commerce companies, social media companies, etc. on their scalable AI/ML needs on the cloud, and help build POCs and develop production-ready solutions.
  • Candidates will be provided with opportunities for training and professional certifications on the job in these areas - Azure Machine learning services, Microsoft Customer Insights, Spark, Chatbots, DataBricks, NoSQL databases etc.
  • Assist the team in conducting AI demos, talks, and workshops occasionally to large audiences of senior stakeholders in the industry.
  • Work on large enterprise scale projects end-to-end, involving domain specific projects across banking, finance, ecommerce, social media etc.
  • Keen interest to learn new technologies and latest developments and apply them to projects assigned.
Desired Skills
  • Professional Hands-on coding experience in python for over 1 year for Data scientist, and over 3 years for Sr Data Scientist. 
  • This is primarily a programming/development-oriented role - hence strong programming skills in writing object-oriented and modular code in python and experience of pushing projects to production is important.
  • Strong foundational knowledge and professional experience in 
  • Machine learning, (Compulsory)
  • Deep Learning (Compulsory)
  • Strong knowledge of at least one of: Natural Language Processing, Computer Vision, Speech Processing, or Business Analytics
  • Understanding of Database technologies and SQL. (Compulsory)
  • Knowledge of the following Frameworks:
  • Scikit-learn (Compulsory)
  • Keras/TensorFlow/PyTorch (at least one of these is compulsory)
  • API development in python for ML models (good to have)
  • Excellent communication skills are necessary to succeed in this role, as it has high external visibility and multiple opportunities to present data science results to large external audiences that will include VPs, Directors, CXOs, etc.
  • Hence communication skills will be a key consideration in the selection process.
Read more
Pune
2 - 6 yrs
₹12L - ₹16L / yr
SQL
ETL
Data engineering
Big Data
Java
+2 more
  • Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
  • Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
  • Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
  • Build data pipelines that clean, transform, and aggregate data from disparate sources
  • Develop, maintain and optimize ETLs to increase data accuracy, data stability, data availability, and pipeline performance.
  • Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
  • Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of consumer experience.
  • Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.
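The clean/transform/aggregate pipeline work described in the bullets above can be sketched in plain Python (field names and data are hypothetical; real pipelines would use Spark, Glue, or a similar framework):

```python
import csv
import io
from collections import defaultdict

# Hypothetical ETL sketch: extract rows from CSV, clean/transform them,
# and aggregate revenue per region. The three stages mirror a real
# pipeline; only the scale and tooling differ.
RAW = """region,amount
north, 120.5
south,80
north,abc
south, 19.5
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    clean = []
    for row in rows:
        try:
            clean.append((row["region"].strip(), float(row["amount"])))
        except ValueError:
            continue  # drop malformed records
    return clean

def aggregate(pairs):
    totals = defaultdict(float)
    for region, amount in pairs:
        totals[region] += amount
    return dict(totals)

totals = aggregate(transform(extract(RAW)))
print(totals)  # {'north': 120.5, 'south': 99.5}
```

Note the malformed "abc" record is dropped in the transform stage; production pipelines would log or quarantine such records to increase data accuracy.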

Job Qualifications:

  • Bachelor’s or master's degree in Computer Science, Information management, Statistics or related field
  • 5+ years of experience in the Consumer or Healthcare industry in an analytical role with a focus on building on data pipelines, querying data, analyzing, and clearly presenting analyses to members of the data science team.
  • Technical expertise with data models, data mining.
  • Hands-on Knowledge of programming languages in Java, Python, R, and Scala.
  • Strong knowledge of Big Data tools like Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
  • Having knowledge in tools like AWS Glue, S3, AWS EMR, Streaming data pipelines, Kafka/Kinesis is desirable.
  • Hands-on knowledge in SQL and No-SQL database design.
  • Having knowledge in CI/CD for the building and hosting of the solutions.
  • Having AWS certification is an added advantage.
  • Having Strong knowledge in visualization tools like Tableau, QlikView is an added advantage
  • A team player capable of working and integrating across cross-functional teams for implementing project requirements. Experience in technical requirements gathering and documentation.
  • Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
  • A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists
Read more
Remote only
4 - 8 yrs
₹10L - ₹20L / yr
Python
Ruby
Ruby on Rails (ROR)
Data Structures
Algorithms
+4 more

About the Company:

It is a Data as a Service company that helps businesses harness the power of data. Our technology fuels some of the most interesting big data projects in the world. We are a small bunch of people working towards shaping the imminent data-driven future by solving some of its fundamental and toughest challenges.

 

 

Role: We are looking for an experienced team lead to drive data acquisition projects end to end. In this role, you will be working in the web scraping team with data engineers, helping them solve complex web problems and mentor them along the way. You’ll be adept at delivering large-scale web crawling projects, breaking down barriers for your team and planning at a higher level, and getting into the detail to make things happen when needed.  

 

Responsibilities  

  •  Interface with clients and sales team to translate functional requirements into technical requirements 
  •  Plan and estimate tasks with your team, in collaboration with the delivery managers 
  •  Engineer complex data acquisition projects 
  •  Guide and mentor your team of engineers 
  • Anticipate issues that might arise and proactively factor them into the design
  •  Perform code reviews and suggest design changes 

 

 

Prerequisites 

  • Between 5 and 8 years of relevant experience
  • Fluent programming skills; well-versed with scripting languages like Python or Ruby
  • Solid foundation in data structures and algorithms
  • Excellent tech troubleshooting skills
  • Good understanding of the web data landscape
  • Prior exposure to the DOM and XPath, and hands-on experience with Selenium/automated testing, is a plus
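The DOM/XPath exposure mentioned above can be sketched with the standard library's xml.etree, which supports a limited XPath subset (element names here are made up; a real crawler would use lxml, BeautifulSoup, or Selenium for messy HTML):

```python
import xml.etree.ElementTree as ET

# Minimal DOM/XPath sketch using the standard library. The markup is a
# hypothetical, well-formed fragment; real pages need a forgiving parser.
HTML = """
<html>
  <body>
    <div class="product"><span class="name">Widget</span><span class="price">9.99</span></div>
    <div class="product"><span class="name">Gadget</span><span class="price">19.99</span></div>
  </body>
</html>
"""

root = ET.fromstring(HTML)
# ElementTree supports a small XPath subset, including attribute predicates.
names = [el.text for el in root.findall(".//span[@class='name']")]
print(names)  # ['Widget', 'Gadget']
```

Selectors like this are the core of any scraper; the engineering challenge at scale is keeping them robust as target sites change.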

 

Skills and competencies 

  • Prior experience with team handling and people management is mandatory 
  • Work independently with little to no supervision 
  • Extremely high attention to detail  
  •  Ability to juggle between multiple projects  
Read more
at One Labs
Posted by Rahul Gupta
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹3L - ₹6L / yr
Data Science
Deep Learning
Python
Keras
TensorFlow
+1 more

Job Description


We are looking for a data scientist who will help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.

Responsibilities

  • Selecting features, building and optimizing classifiers using machine learning techniques
  • Data mining using state-of-the-art methods
  • Extending company’s data with third party sources of information when needed
  • Enhancing data collection procedures to include information that is relevant for building analytic systems
  • Processing, cleansing, and verifying the integrity of data used for analysis
  • Doing ad-hoc analysis and presenting results in a clear manner
  • Creating automated anomaly detection systems and constantly tracking their performance
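As a minimal illustration of the automated anomaly detection mentioned above, here is a z-score detector in plain Python (the threshold and data are illustrative only; production systems also track drift and retrain):

```python
import statistics

# Toy anomaly detector: flag readings more than 2 population standard
# deviations from the mean. Threshold and data are illustrative only.
def zscore_anomalies(values, threshold=2.0):
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [x for x in values if stdev and abs(x - mean) / stdev > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 42.0]
print(zscore_anomalies(readings))  # [42.0]
```

Note that extreme outliers inflate the standard deviation itself, which is why robust variants (median/MAD-based) are often preferred in practice.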

Skills and Qualifications

  • Excellent understanding of machine learning techniques and algorithms, such as Linear regression, SVM, Decision Forests, LSTM, CNN etc.
  • Experience with Deep Learning preferred.
  • Experience with common data science toolkits, such as R, NumPy, MatLab, etc. Excellence in at least one of these is highly desirable
  • Great communication skills
  • Proficiency in using query languages such as SQL, Hive, Pig 
  • Good applied statistics skills, such as statistical testing, regression, etc.
  • Good scripting and programming skills 
  • Data-oriented personality
Read more
at Foghorn Systems
Posted by Abhishek Vijayvargia
Pune
0 - 7 yrs
₹15L - ₹50L / yr
R Programming
Python
Data Science

Role and Responsibilities

  • Execute data mining projects, training and deploying models over a typical duration of 2–12 months.
  • The ideal candidate should be able to innovate, analyze the customer requirement, develop a solution within the time box of the project plan, and execute and deploy the solution.
  • Integrate the data mining projects as embedded data mining applications in the FogHorn platform (on Docker or Android).

Core Qualifications
Candidates must meet ALL of the following qualifications:

  • Have analyzed, trained and deployed at least three data mining models in the past. If the candidate did not directly deploy their own models, they will have worked with others who have put their models into production. The models should have been validated as robust over at least an initial time period.
  • Three years of industry work experience, developing data mining models which were deployed and used.
  • Programming experience in Python is core using data mining related libraries like Scikit-Learn. Other relevant Python mining libraries include NumPy, SciPy and Pandas.
  • Data mining algorithm experience in at least 3 algorithms across: prediction (statistical regression, neural nets, deep learning, decision trees, SVM, ensembles), clustering (k-means, DBSCAN or other) or Bayesian networks
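Of the clustering algorithms listed above, k-means is the simplest to sketch. A minimal 1-D pure-Python version (illustrative only; in practice you would use scikit-learn's KMeans):

```python
# Minimal 1-D k-means sketch: alternate assignment and update steps.
# Illustrative only; real work would use scikit-learn's KMeans.
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.8, 10.0, 10.2]
centers_out = kmeans_1d(data, centers=[0.0, 5.0])
print(centers_out)  # approximately [1.0, 10.0]
```

K-means is sensitive to the initial centers, which is why production implementations use smarter initialization (k-means++) and multiple restarts.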

Bonus Qualifications
Any of the following extra qualifications will make a candidate more competitive:

  • Soft Skills
    • Sets expectations, develops project plans and meets expectations.
    • Experience adapting technical dialogue to the right level for the audience (i.e. executives) or specific jargon for a given vertical market and job function.
  • Technical skills
    • Commonly, candidates have a MS or Ph.D. in Computer Science, Math, Statistics or an engineering technical discipline. BS candidates with experience are considered.
    • Have managed past models in production over their full life cycle until model replacement is needed. Have developed automated model refreshing on newer data. Have developed frameworks for model automation as a prototype for product.
    • Training or experience in Deep Learning, such as TensorFlow, Keras, convolutional neural networks (CNN) or Long Short Term Memory (LSTM) neural network architectures. If you don’t have deep learning experience, we will train you on the job.
    • Shrinking deep learning models, optimizing to speed up execution time of scoring or inference.
    • OpenCV or other image processing tools or libraries
    • Cloud computing: Google Cloud, Amazon AWS or Microsoft Azure. We have integration with Google Cloud and are working on other integrations.
    • Decision trees like XGBoost or Random Forests is helpful.
    • Complex Event Processing (CEP) or other streaming data as a data source for data mining analysis
    • Time series algorithms from ARIMA to LSTM to Digital Signal Processing (DSP).
    • Bayesian Networks (BN), a.k.a. Bayesian Belief Networks (BBN) or Graphical Belief Networks (GBN)
    • Experience with PMML is of interest (see www.DMG.org).
  • Vertical experience in Industrial Internet of Things (IoT) applications:
    • Energy: Oil and Gas, Wind Turbines
    • Manufacturing: Motors, chemical processes, tools, automotive
    • Smart Cities: Elevators, cameras on population or cars, power grid
    • Transportation: Cars, truck fleets, trains

 

About FogHorn Systems
FogHorn is a leading developer of “edge intelligence” software for industrial and commercial IoT application solutions. FogHorn’s Lightning software platform brings the power of advanced analytics and machine learning to the on-premise edge environment enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. FogHorn’s technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as Smart Grid, Smart City, Smart Building and connected vehicle applications.

Press:   https://www.foghorn.io/press-room/

Awards:  https://www.foghorn.io/awards-and-recognition/

  • 2019 Edge Computing Company of the Year – Compass Intelligence
  • 2019 Internet of Things 50: 10 Coolest Industrial IoT Companies – CRN
  • 2018 IoT Platforms Leadership Award & Edge Computing Excellence – IoT Evolution World Magazine
  • 2018 10 Hot IoT Startups to Watch – Network World. (Gartner estimated 20 billion connected things in use worldwide by 2020)
  • 2018 Winner in Artificial Intelligence and Machine Learning – Globe Awards
  • 2018 Ten Edge Computing Vendors to Watch – ZDNet & 451 Research
  • 2018 The 10 Most Innovative AI Solution Providers – Insights Success
  • 2018 The AI 100 – CB Insights
  • 2017 Cool Vendor in IoT Edge Computing – Gartner
  • 2017 20 Most Promising AI Service Providers – CIO Review

Our Series A round was for $15 million.  Our Series B round was for $30 million October 2017.  Investors include: Saudi Aramco Energy Ventures, Intel Capital, GE, Dell, Bosch, Honeywell and The Hive.

About the Data Science Solutions team
In 2018, our Data Science Solutions team grew from 4 to 9.  We are growing again from 11. We work on revenue generating projects for clients, such as predictive maintenance, time to failure, manufacturing defects.  About half of our projects have been related to vision recognition or deep learning. We are not only working on consulting projects but developing vertical solution applications that run on our Lightning platform, with embedded data mining.

Our data scientists like our team because:

  • We care about “best practices”
  • We have a direct impact on the company’s revenue
  • We give or receive mentoring as part of the collaborative process
  • Questioning and challenging the status quo with data is safe
  • We balance intellectual curiosity with humility
  • We present papers or projects in our “Thought Leadership” meeting series, to support continuous learning

 

Read more