About FarmGuide
We are searching for an accountable, multitalented data engineer to support the work of our data scientists. The data engineer will be responsible for employing machine learning techniques to create and maintain structures that allow data to be analyzed, while staying familiar with the dominant programming and deployment strategies in the field. Throughout this process, you should collaborate with coworkers to ensure that your approach meets the needs of each project.
To ensure success as a data engineer, you should demonstrate flexibility, creativity, and the capacity to receive and apply constructive criticism. A formidable data engineer will demonstrate insatiable curiosity and outstanding interpersonal skills.
Responsibilities:
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Requirements:
- Bachelor's degree in data engineering, big data analytics, computer engineering, or related field.
- Master's degree in a relevant field is advantageous.
- Proven experience as a data engineer, software developer, or similar.
- Expert proficiency in Python, C++, Java, R, and SQL.
- Familiarity with Hadoop or suitable equivalent.
- Excellent analytical and problem-solving skills.
- A knack for independence and group work.
- Scrupulous approach to duties.
- Capacity to successfully manage a pipeline of duties with minimal supervision.
Generation Engineer
We are looking for a Python developer who is passionate about driving more solar and clean energy in the world by working with us. Our software helps anyone understand how much solar could be put up on a rooftop, and calculates how many units of clean energy the solar PV system would generate along with how much the homeowner would save. This is a crucial step in educating people who want to go solar but aren’t completely convinced of solar's value proposition. If you are interested in bringing the latest technologies to the fast-growing solar industry and want to help society transition to a more sustainable future, we would love to hear from you!
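The rooftop estimate described above can be sketched in a few lines. This is only an illustrative outline of that kind of calculation, not the company's actual model: the panel density, generation factor, usable-roof fraction, and tariff below are all assumed placeholder values.

```python
# Illustrative rooftop solar estimate. All constants are assumed
# placeholder values, not the product's real parameters.

def estimate_solar(roof_area_sqm: float,
                   usable_fraction: float = 0.7,     # assumed usable share of roof
                   kw_per_sqm: float = 0.15,         # assumed panel density (kW/m^2)
                   annual_kwh_per_kw: float = 1400,  # assumed yearly yield per kW
                   tariff_per_kwh: float = 8.0):     # assumed electricity tariff
    """Return (system size in kW, annual generation in kWh, annual savings)."""
    system_kw = roof_area_sqm * usable_fraction * kw_per_sqm
    annual_kwh = system_kw * annual_kwh_per_kw
    savings = annual_kwh * tariff_per_kwh
    return round(system_kw, 2), round(annual_kwh), round(savings)
```

For a 100 m² roof under these assumptions, this yields a 10.5 kW system generating about 14,700 units a year.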
You will -
- Be an early employee at a growing startup and help shape the team culture
- Safeguard code quality on your team, reviewing others’ code with an eye to performance and maintainability
- Be trusted to take point on complex product initiatives
- Work in an ownership-driven, micromanagement-free environment
You should have:
- Strong programming fundamentals. (If you don’t officially have a CS degree but know programming, that’s fine with us!)
- A strong problem-solving attitude.
- Experience with solar or electrical modelling is a plus, although not required.
Experience: 3 to 8 years
Skill Set
- Experience in algorithm development with a focus on signal processing, pattern recognition, machine learning, classification, data mining, and other areas of machine intelligence.
- Ability to analyse data streams from multiple sensors and develop algorithms to extract accurate and meaningful sport metrics.
- A deep understanding of IMU sensors and biosensors such as HRM and ECG.
- A good understanding of power and memory management on embedded platforms.
- Expertise in the design of multitasking, event-driven, real-time firmware using C, and an understanding of RTOS concepts.
- Knowledge of machine learning, Python, and analytical, methodical approaches to data analysis and verification.
- Prior experience in fitness algorithm development using IMU sensors.
- Interest in fitness activities and knowledge of human body anatomy
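As a feel for the "sport metrics from sensor streams" work described above, here is a deliberately minimal sketch: counting candidate steps from accelerometer samples by threshold crossing on the signal magnitude. Real fitness pipelines add filtering, peak detection, and learned classifiers; the sample format and threshold here are assumptions for illustration only.

```python
# Minimal step-count sketch from IMU accelerometer data.
# Threshold and data format are illustrative assumptions.

import math

def count_steps(samples, threshold=1.2):
    """samples: list of (ax, ay, az) accelerations in g.
    Count each upward crossing of the magnitude threshold as one step."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            steps += 1       # rising edge: new candidate step
            above = True
        elif mag <= threshold:
            above = False    # re-arm once we drop below the threshold
    return steps
```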
Senior Data Scientist (Health Metrics)
at Biostrap
Introduction
The Biostrap platform extracts many metrics related to health, sleep, and activity. Many algorithms are designed through research and often based on scientific literature, and in some cases they are augmented with or entirely designed using machine learning techniques. Biostrap is seeking a Data Scientist to design, develop, and implement algorithms to improve existing metrics and measure new ones.
Job Description
As a Data Scientist at Biostrap, you will take on projects to improve or develop algorithms to measure health metrics, including:
- Research: search literature for starting points of the algorithm
- Design: decide on the general idea of the algorithm, in particular whether to use machine learning, mathematical techniques, or something else.
- Implement: program the algorithm in Python, and help deploy it.
The algorithms and their implementation will have to be accurate, efficient, and well-documented.
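To make the research-design-implement loop concrete, here is a sketch of one well-known health metric of the kind such a role might implement: RMSSD, a standard heart-rate-variability measure over successive beat-to-beat (RR) intervals. This is a generic textbook formula, not Biostrap's algorithm, and the input values are illustrative.

```python
# RMSSD: root mean square of successive differences of RR intervals.
# A standard HRV formula, shown here purely as an illustration.

import math

def rmssd(rr_ms):
    """rr_ms: list of beat-to-beat intervals in milliseconds (len >= 2)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```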
Requirements
- A Master’s degree in a computational field, with a strong mathematical background.
- Strong knowledge of, and experience with, different machine learning techniques, including their theoretical background.
- Strong experience with Python
- Experience with Keras/TensorFlow, and preferably also with RNNs
- Experience with AWS or similar services for data pipelining and machine learning.
- Ability and drive to work independently on an open problem.
- Fluency in English.
Data Engineer
Technical Knowledge (Must Have)
- Strong experience in SQL / HiveQL / AWS Athena.
- Strong expertise in the development of data pipelines (SnapLogic is preferred).
- Design, development, deployment, and administration of data processing applications.
- Good exposure to AWS and Azure cloud computing environments.
- Knowledge of big data, AWS cloud architecture, best practices, security, governance, metadata management, data quality, etc.
- Data extraction from various firm sources (RDBMS, unstructured data sources) and loading to a data lake following best practices.
- Knowledge in Python
- Good knowledge in NoSQL technologies (Neo4J/ MongoDB)
- Experience/knowledge in SnapLogic (ETL Technologies)
- Working knowledge of Unix (AIX, Linux) and shell scripting
- Experience/knowledge in data modeling and database development
- Experience/knowledge in creating reports and dashboards in Tableau / Power BI
We are looking for a Data Analyst who oversees organisational data analytics. This will require you to design and help implement the data analytics platform that will keep the organisation running. The team will be the go-to for all data needs for the app, and we are looking for a self-starter who is hands-on and yet able to abstract problems and anticipate data requirements.
This person should be a very strong technical data analyst who can design and implement data systems independently. They should also be proficient in business reporting and have a keen interest in providing the data needed by the business.
Tools familiarity: SQL, Python, Mixpanel, Metabase, Google Analytics, CleverTap, App Analytics
Responsibilities
- Build processes and frameworks for metrics, analytics, experimentation, and user insights; lead the data analytics team
- Align metrics across teams to make them actionable and promote accountability
- Develop data-based frameworks for assessing and strengthening Product Market Fit
- Identify viable growth strategies through data and experimentation
- Experimentation for product optimisation and understanding user behaviour
- Structured approach towards deriving user insights, answer questions using data
- Work closely with technical and business teams to get this implemented
Skills
- 4 to 6 years in a relevant data analytics role at a product-oriented company
- Highly organised, technically sound & good at communication
- Ability to handle & build for cross functional data requirements / interactions with teams
- Great with Python, SQL
- Able to build and mentor a team
- Knowledge of key business metrics like cohort, engagement cohort, LTV, ROAS, ROE
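Two of the business metrics named above reduce to simple arithmetic, sketched here with assumed figures; real cohort-based LTV models are considerably more involved than this naive version.

```python
# Illustrative business-metric formulas; input figures are assumed.

def roas(revenue, ad_spend):
    """Return on ad spend: revenue attributed to ads / ad spend."""
    return revenue / ad_spend

def naive_ltv(avg_revenue_per_user_per_month, expected_months):
    """Naive lifetime value: average monthly revenue x expected lifetime."""
    return avg_revenue_per_user_per_month * expected_months
```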
Eligibility
BTech or MTech in Computer Science/Engineering from a Tier 1 or Tier 2 college
Good knowledge of data analytics and data visualization tools. A formal certification would be an added advantage.
We are more interested in what you CAN DO than your location, education, or experience levels.
Send us your code samples / GitHub profile / published articles if applicable.
Internship - Java / Python / AI / ML
at Wise Source
Bachelor’s degree or equivalent experience
● Knowledge of database fundamentals and fluency in advanced SQL, including concepts such as windowing functions
● Knowledge of popular scripting languages for data processing such as Python, as well as familiarity with common frameworks such as Pandas
● Experience building streaming ETL pipelines with tools such as Apache Flink, Apache Beam, Google Cloud Dataflow, DBT, and equivalents
● Experience building batch ETL pipelines with tools such as Apache Airflow, Spark, DBT, or custom scripts
● Experience working with messaging systems such as Apache Kafka (and hosted equivalents such as Amazon MSK) and Apache Pulsar
● Familiarity with BI applications such as Tableau, Looker, or Superset
● Hands-on coding experience in Java or Scala
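The windowing-function concept mentioned above can be demonstrated with Python's stdlib sqlite3 driver (SQLite has supported window functions since version 3.25); the table and data below are invented for illustration.

```python
# Running total with a SQL window function, via stdlib sqlite3.
# The sales table and its rows are made-up example data.

import sqlite3

def running_totals():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (day TEXT, amount INT)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("mon", 10), ("tue", 20), ("wed", 5)])
    # SUM(...) OVER (...) computes a per-row running total without
    # collapsing rows the way GROUP BY would.
    rows = conn.execute(
        "SELECT day, amount, "
        "SUM(amount) OVER (ORDER BY day ROWS UNBOUNDED PRECEDING) "
        "FROM sales ORDER BY day"
    ).fetchall()
    conn.close()
    return rows
```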
Responsibilities
- Research and test novel machine learning approaches for analysing large-scale distributed computing applications.
- Develop production-ready implementations of proposed solutions across different AI models and ML algorithms, including testing on live customer data to improve accuracy, efficacy, and robustness
- Work closely with other functional teams to integrate implemented systems into the SaaS platform
- Suggest innovative and creative concepts and ideas that would improve the overall platform
Qualifications
The ideal candidate must have the following qualifications:
- 5+ years of experience in the practical implementation and deployment of large customer-facing ML-based systems.
- MS or M.Tech (preferred) in applied mathematics/statistics; CS or engineering disciplines are acceptable but must be paired with strong quantitative and applied mathematical skills
- In-depth working familiarity, beyond coursework, with classical and current ML techniques, both supervised and unsupervised learning techniques and algorithms
- Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimization
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python are a must
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
Most importantly, you should be someone who is passionate about building new and innovative products that solve tough real-world problems.
Location
Chennai, India