Our client is the world’s largest media investment company and is a part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Software Engineer to join us. In this role, you will be responsible for coding and implementing the custom marketing applications that Tech COE builds for its customers, and for managing a small team of developers.
What your day job looks like:
- Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
- Develop data extraction and manipulation code based on business rules
- Develop automated and manual test cases for the code written
- Design and construct data store and procedures for their maintenance
- Perform data extract, transform, and load (ETL) activities from several data sources (a minimal sketch follows this list)
- Develop and maintain strong relationships with stakeholders
- Write high-quality code to prescribed standards
- Participate in internal projects as required
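To make the ETL duties above concrete, here is a minimal sketch of a business-rule-driven extract-transform-load step in Python; the file names, column layout, and the APAC discount rule are all hypothetical:

```python
import csv

# Hypothetical business rule: orders from the "APAC" region get a 10% discount.
def apply_discount(row):
    price = float(row["price"])
    if row["region"] == "APAC":
        price *= 0.90
    row["final_price"] = f"{price:.2f}"
    return row

# Extract: read raw rows from a source file (assumed columns: id, region, price).
with open("orders_raw.csv", newline="") as src:
    rows = [apply_discount(r) for r in csv.DictReader(src)]  # Transform

# Load: write the enriched rows to the target file.
with open("orders_clean.csv", "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=["id", "region", "price", "final_price"])
    writer.writeheader()
    writer.writerows(rows)
```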
Minimum qualifications:
- B.Tech./MCA or equivalent preferred
- At least 3 years of hands-on experience with Big Data, ETL development, and data processing
What you’ll bring:
- Strong experience working with Snowflake, SQL, and PHP/Python
- Strong experience writing complex SQL queries (an illustrative example follows this list)
- Good communication skills
- Good experience working with a BI tool such as Tableau or Power BI
- Experience with Sqoop, Spark, EMR, and Hadoop/Hive is a plus
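As an illustration of the complex SQL this role asks for, here is a window-function query of the kind that ports directly to Snowflake; it runs against SQLite only so the sketch is self-contained, and the sales table and its columns are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, rep TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('North', 'Asha', 120), ('North', 'Ben', 200),
        ('South', 'Chen', 90),  ('South', 'Dee', 310);
""")

# Rank reps within each region by revenue and keep the top seller per region,
# a typical window-function pattern.
query = """
SELECT region, rep, amount
FROM (
    SELECT region, rep, amount,
           ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rn
    FROM sales
)
WHERE rn = 1;
"""
for row in conn.execute(query):
    print(row)  # ('North', 'Ben', 200.0), ('South', 'Dee', 310.0)
```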
Requirements:
- 2+ years of experience (4+ for Senior Data Engineer) with system/data integration and the development or implementation of enterprise and/or cloud software
- Engineering degree in Computer Science, Engineering, or a related field.
- Extensive hands-on experience with data integration/EAI technologies (File, API, Queues, Streams), ETL tools, and building custom data pipelines (a toy pipeline sketch follows this list).
- Demonstrated proficiency with Python, JavaScript and/or Java
- Familiarity with version control/SCM is a must (experience with git is a plus).
- Experience with relational and NoSQL databases (any vendor)
- Solid understanding of cloud computing concepts.
- Strong organisational and troubleshooting skills with attention to detail.
- Strong analytical ability, judgment, and problem-solving techniques
- Interpersonal and communication skills with the ability to work effectively in a cross-functional team.
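As a toy illustration of the file/queue integration patterns listed above, here is a self-contained Python sketch; the in-process queue.Queue stands in for a real broker, which is an assumption made for brevity:

```python
import queue
import threading

events = queue.Queue()

# Producer: in a real integration this side would tail a file, poll an API,
# or read a stream; here it just emits a few synthetic records.
def produce():
    for i in range(5):
        events.put({"id": i, "payload": f"record-{i}"})
    events.put(None)  # Sentinel: tells the consumer to stop.

# Consumer: each record would normally be transformed and loaded downstream.
def consume():
    while True:
        item = events.get()
        if item is None:
            break
        print("loaded", item)

t = threading.Thread(target=produce)
t.start()
consume()
t.join()
```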
Job Title: Data Warehouse/Redshift Admin
Location: Remote
Job Description
- AWS Redshift cluster planning
- AWS Redshift cluster maintenance
- AWS Redshift cluster security
- AWS Redshift cluster monitoring
- Experience managing day-to-day operations of provisioning, maintaining backups, DR, and monitoring of AWS Redshift/RDS clusters
- Hands-on experience with query tuning in high-concurrency environments (a monitoring sketch follows this list)
- Expertise setting up and managing AWS Redshift
- AWS certifications preferred (AWS Certified SysOps Administrator)
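Given the focus on monitoring and query tuning, here is a minimal sketch that pulls currently running queries from Redshift's STV_RECENTS system table; the connection parameters are placeholders, and psycopg2 is just one common driver choice:

```python
import psycopg2  # Redshift speaks the PostgreSQL wire protocol.

# Placeholder credentials: substitute your cluster endpoint and user.
conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="admin", password="...",
)

# STV_RECENTS lists recently run and currently running queries; long-running
# entries with status 'Running' are the usual starting point for tuning work.
with conn.cursor() as cur:
    cur.execute("""
        SELECT user_name, duration, query
        FROM stv_recents
        WHERE status = 'Running'
        ORDER BY duration DESC;
    """)
    for user_name, duration, text in cur.fetchall():
        print(user_name, duration, text[:80])
```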
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet business requirements
- Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Work with the Data, Analytics & Tech team to extract, arrange, and analyze data
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies (a sketch follows this list)
- Build analytical tools that use the data pipeline to provide actionable insight into key business performance metrics, including operational efficiency and customer acquisition
- Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs while assisting with data-related technical issues
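To ground the AWS-based ETL bullet above, here is a minimal sketch that extracts a CSV from S3 with boto3, transforms it with pandas, and stages a Redshift COPY; the bucket, key, target table, and IAM role are all hypothetical:

```python
import io

import boto3
import pandas as pd

# Extract: pull a raw CSV object from S3 (bucket and key are hypothetical).
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-data-lake", Key="raw/orders.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Transform: a simple cleanup step -- drop incomplete rows, derive a column.
df = df.dropna(subset=["order_id", "amount"])
df["amount_usd"] = df["amount"].round(2)

# Load: write the cleaned data back to S3, then COPY it into Redshift.
buf = io.StringIO()
df.to_csv(buf, index=False)
s3.put_object(
    Bucket="example-data-lake",
    Key="clean/orders.csv",
    Body=buf.getvalue().encode("utf-8"),
)

copy_sql = """
COPY analytics.orders
FROM 's3://example-data-lake/clean/orders.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
CSV IGNOREHEADER 1;
"""  # Execute via your Redshift connection, e.g. the psycopg2 pattern above.
```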
- SQL
- Ruby or Python (Ruby preferred)
- Apache Hadoop-based analytics
- Data warehousing
- Data architecture
- Schema design (a star-schema sketch follows this list)
- ML
- Prior experience of 2 to 5 years as a Data Engineer.
- Ability to manage and communicate data warehouse plans to internal teams.
- Experience designing, building, and maintaining data processing systems.
- Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions.
- Excellent analytic skills associated with working on unstructured datasets.
- Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata.
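Touching on the data-warehousing and schema-design items above, here is a minimal star-schema sketch; it uses SQLite purely so the example is self-contained, and the dimension and fact tables are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A minimal star schema: one narrow fact table keyed to two dimensions.
conn.executescript("""
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,
        full_date TEXT, month TEXT, year INTEGER
    );
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        name TEXT, segment TEXT
    );
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        date_key INTEGER REFERENCES dim_date(date_key),
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        amount REAL
    );
""")
# Analytical queries join the narrow fact table to the descriptive
# dimensions, which keeps scans cheap and the model easy to reason about.
```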
YOU'LL BE OUR: Data Scientist
YOU'LL BE BASED AT: IBC Knowledge Park, Bangalore
YOU'LL BE ALIGNED WITH: Engineering Manager
YOU'LL BE A MEMBER OF: Data Intelligence
WHAT YOU'LL DO AT ATHER:
- Work with the vehicle intelligence platform to evolve the algorithms and the platform, enhancing ride experience
- Provide data-driven solutions, from simple to fairly complex insights, on the data collected from the vehicle
- Identify measures and metrics that can be used to make decisions across firmware components, and productionize them
- Support the data science lead and manager and partner in fairly intensive projects around diagnostics, predictive modeling, BI, and engineering data sciences
- Build and automate scripts that can be re-used efficiently
- Build interactive reports/dashboards that can be re-used across engineering teams for their discussions and explorations (a telemetry-report sketch follows this list)
- Support monitoring and measuring the success of algorithms and features built, and lead innovation through objective reasoning and thinking
- Engage with the data science lead and the engineering team stakeholders on the solution approach and draft a plan of action
- Contribute to the product/team roadmap by generating and implementing innovative data- and analysis-based ideas as product features
- Handhold/guide the team in successful conceptualization and implementation of key product differentiators through effective benchmarking
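Since the role involves re-usable telemetry reports, here is a minimal pandas sketch of the kind of aggregate such a report might start from; the synthetic battery-temperature data and column names are hypothetical:

```python
import numpy as np
import pandas as pd

# Synthetic telemetry: one reading per minute for a single vehicle.
rng = np.random.default_rng(seed=0)
frame = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=240, freq="min"),
    "battery_temp_c": 30 + rng.normal(0, 1.5, 240).cumsum() * 0.1,
})

# A rolling mean smooths sensor noise; flag excursions above a threshold.
frame["temp_smooth"] = frame["battery_temp_c"].rolling(window=15).mean()
frame["alert"] = frame["temp_smooth"] > 35.0

# Hourly summary -- the shape of table a dashboard panel would consume.
summary = frame.set_index("timestamp").resample("1h")["battery_temp_c"].agg(["mean", "max"])
print(summary)
```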
HERE'S WHAT WE ARE LOOKING FOR :
• Good C++ and Golang programming skills and a solid understanding of system architecture
• Experience with IoT and telemetry is a plus
• Proficient in R Markdown/Python/Grafana
• Proficient in SQL and NoSQL
• Proficient in R/Python programming
• Good understanding of ML techniques/Spark ML (a minimal pipeline sketch follows this list)
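As a minimal illustration of the Spark ML item above, here is a PySpark pipeline sketch; the toy data and feature columns are made up, and it assumes pyspark is installed:

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ml-sketch").getOrCreate()

# Toy training data: two numeric features and a binary label.
train = spark.createDataFrame(
    [(0.0, 1.2, 0), (1.5, 0.3, 1), (0.2, 0.9, 0), (2.1, 0.1, 1)],
    ["f1", "f2", "label"],
)

# Assemble the raw columns into the single vector column Spark ML expects,
# then fit a logistic regression on top of it.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train)

model.transform(train).select("label", "prediction").show()
spark.stop()
```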
YOU BRING TO ATHER:
• B.E./B.Tech., preferably in Computer Science
• 3 to 5 years of work experience as a Data Scientist
• Responsible for developing and maintaining applications with PySpark
Responsibilities
- Own the design, development, testing, deployment, and craftsmanship of the team’s infrastructure and systems capable of handling massive amounts of requests with high reliability and scalability
- Leverage deep and broad technical expertise to mentor engineers and provide leadership on resolving complex technology issues
- Entrepreneurial and out-of-the-box thinking, essential for a technology startup
- Guide the team in unit-testing code for robustness, including edge cases, usability, and general reliability (an example test follows this list)
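To make the unit-testing bullet concrete, here is a small pytest sketch covering edge cases for a hypothetical helper function:

```python
import pytest

# Hypothetical helper: average request latency, guarding the empty case.
def mean_latency(samples_ms):
    if not samples_ms:
        raise ValueError("no samples")
    return sum(samples_ms) / len(samples_ms)

def test_typical_input():
    assert mean_latency([10, 20, 30]) == 20

def test_single_sample_edge_case():
    assert mean_latency([42]) == 42

def test_empty_input_raises():
    # Robustness: the degenerate case must fail loudly, not return NaN.
    with pytest.raises(ValueError):
        mean_latency([])
```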
Requirements
- In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers
- Experience in feature extraction, object recognition and tracking, image registration, noise reduction, image calibration, and correction
- Ability to understand, optimize and debug imaging algorithms
- Understanding of and experience with the OpenCV library (a short example follows this list)
- Fundamental understanding of mathematical techniques involved in ML and DL schemas (Instance-based methods, Boosting methods, PGM, Neural Networks etc.)
- Thorough understanding of state-of-the-art DL concepts (sequence modeling, attention, convolution, etc.), along with a knack for imagining new schemas that work for the given data.
- Understanding of engineering principles and a clear understanding of data structures and algorithms
- Experience writing production-level code in either C++ or Java
- Experience with Python libraries such as pandas, NumPy, and SciPy
- Experience with TensorFlow and scikit-learn.
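For the OpenCV item above, here is a minimal noise-reduction-plus-edge-detection sketch; it builds a synthetic image with NumPy so it runs without an input file, and the blur kernel and Canny thresholds are illustrative choices:

```python
import cv2
import numpy as np

# Synthetic test image: a bright rectangle on a dark, noisy background.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (50, 50), (150, 150), 255, -1)  # filled rectangle
noise = np.random.default_rng(0).integers(0, 40, img.shape, dtype=np.uint8)
noisy = cv2.add(img, noise)  # saturating add keeps values in [0, 255]

# Noise reduction: Gaussian blur before edge detection avoids spurious edges.
smoothed = cv2.GaussianBlur(noisy, (5, 5), 0)

# Edge detection: Canny with illustrative hysteresis thresholds.
edges = cv2.Canny(smoothed, 50, 150)

print("edge pixels:", int((edges > 0).sum()))
```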