
- A responsible, passionate professional with the willpower to drive product goals and ensure the outcomes expected from the team.
- A strong desire and eagerness to learn new and emerging technologies.
Skills Required:
- Python / Node.js / Django Rest Framework
- Database structure and design
- Cloud Ops - AWS / Azure
Roles & responsibilities:
- Write and test code, and debug programs
- Design and implement REST APIs
- Build, release, and manage the configuration of all production systems
- Manage a continuous integration and deployment methodology for server-based technologies
- Identify customer problems and create functional prototypes offering a solution
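The REST API responsibility above can be pictured with a short, framework-agnostic sketch. In a real Django Rest Framework project this would be a ViewSet backed by a model and serializer; the in-memory store, class name, and field names here are invented purely for illustration.

```python
# Minimal sketch of a REST resource: each method maps to an HTTP verb
# and returns a (status_code, body) pair. A hypothetical stand-in for
# a DRF ViewSet, not the actual framework API.
class FarmResource:
    def __init__(self):
        self._store = {}   # id -> record
        self._next_id = 1

    def create(self, payload):
        """POST /farms - store a record and return it with its new id."""
        record = dict(payload, id=self._next_id)
        self._store[self._next_id] = record
        self._next_id += 1
        return 201, record

    def retrieve(self, pk):
        """GET /farms/<pk> - return the record or a 404-style error."""
        if pk not in self._store:
            return 404, {"detail": "Not found"}
        return 200, self._store[pk]

    def list(self):
        """GET /farms - return all stored records."""
        return 200, list(self._store.values())
```

The same create/retrieve/list shape carries over directly to DRF, where the framework supplies routing, serialization, and status codes.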
If you are willing to take up challenges and contribute to developing world-class products, this is the place for you.
About FarmMobi:
A trusted enterprise software product company in the AgTech space, started with a mission to revolutionize the global agriculture sector.
We operate on a software-as-a-service (SaaS) model and cater to the needs of global customers in the field of agriculture.
The idea is to use emerging technologies like mobility, IoT, drones, satellite imagery, and blockchain to digitally transform the agriculture landscape.

Required: Mechanical Sales Engineer with good technical knowledge.
Bachelor’s degree in Mechanical Engineering or a related field.
• 0-2 years of experience in technical or industrial sales (experience range may vary by employer).
• Strong understanding of mechanical systems, components, or equipment (e.g., pumps, compressors, HVAC systems, valves, etc.).
• Excellent communication, negotiation, and presentation skills.
• Ability to translate complex technical information into customer-friendly language.
• Proficiency in CRM and MS Office tools.
• Self-motivated, target-driven, and able to work independently.
• Willingness to travel for client meetings and site visits.
Profile: AWS Data Engineer
Mandatory skills: AWS + Databricks + PySpark + SQL
Location: Bangalore/Pune/Hyderabad/Chennai/Gurgaon
Notice Period: Immediate
Key Requirements :
- Design, build, and maintain scalable data pipelines to collect, process, and store data from multiple sources.
- Optimize data storage solutions for better performance, scalability, and cost-efficiency.
- Develop and manage ETL/ELT processes to transform data as per schema definitions, apply slicing and dicing, and make it available for downstream jobs and other teams.
- Collaborate closely with cross-functional teams to understand system and product functionalities, pace up feature development, and capture evolving data requirements.
- Engage with stakeholders to gather requirements and create curated datasets for downstream consumption and end-user reporting.
- Automate deployment and CI/CD processes using GitHub workflows, identifying areas to reduce manual, repetitive work.
- Ensure compliance with data governance policies, privacy regulations, and security protocols.
- Utilize cloud platforms like AWS and work on Databricks for data processing with S3 Storage.
- Work with distributed systems and big data technologies such as Spark, SQL, and Delta Lake.
- Integrate with SFTP to push data securely from Databricks to remote locations.
- Analyze and interpret Spark query execution plans to fine-tune queries for faster, more efficient processing.
- Strong problem-solving and troubleshooting skills in large-scale distributed systems.
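The ETL/ELT "slice and dice" step described above can be sketched in miniature. In the actual stack this would be PySpark on Databricks reading from S3 and writing Delta tables; this stdlib-only stand-in, with invented field names, just shows the projection and aggregation shape of such a transform.

```python
# Toy dataset standing in for rows read from a source table.
RAW = [
    {"region": "south", "crop": "rice",  "yield_kg": 1200},
    {"region": "south", "crop": "wheat", "yield_kg": 800},
    {"region": "north", "crop": "rice",  "yield_kg": 950},
]

def transform(rows, schema):
    """Project each row onto the target schema, dropping extra fields
    (the 'apply schema definition' part of the transform)."""
    return [{col: row[col] for col in schema} for row in rows]

def aggregate_by(rows, key, value):
    """Group rows by `key` and sum `value` - a simple 'dice' that
    produces a curated dataset for downstream consumers."""
    totals = {}
    for row in rows:
        totals[row[key]] = totals.get(row[key], 0) + row[value]
    return totals
```

In PySpark the same two steps would be a `select` followed by a `groupBy(...).sum(...)`, with Spark handling distribution across the cluster.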
1+ years of proven experience in ML/AI with Python
Work with the manager through the entire analytical and machine learning model life cycle:
⮚ Define the problem statement
⮚ Build and clean datasets
⮚ Exploratory data analysis
⮚ Feature engineering
⮚ Apply ML algorithms and assess the performance
⮚ Codify for deployment
⮚ Test and troubleshoot the code
⮚ Communicate analysis to stakeholders
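The life-cycle steps above can be walked end to end on a toy problem: build a labelled dataset, split it, fit a trivial "model", and assess its performance. A real project would use scikit-learn or similar; everything below is stdlib-only, and the threshold rule is a deliberately simple stand-in for an actual learning algorithm.

```python
import random

random.seed(0)

# 1. Build a labelled dataset: label is 1 when the raw value exceeds 50.
data = [(x, 1 if x > 50 else 0) for x in (random.uniform(0, 100) for _ in range(200))]

# 2. Split into train and test sets.
train, test = data[:150], data[150:]

# 3. "Fit": place the decision threshold midway between the class means.
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
threshold = (mean0 + mean1) / 2

# 4. Assess performance on held-out data.
accuracy = sum((x > threshold) == bool(y) for x, y in test) / len(test)
```

Swapping the threshold rule for a scikit-learn estimator leaves the surrounding life cycle (build, split, fit, assess, communicate) unchanged.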
Technical Skills
⮚ Proven experience in usage of Python and SQL
⮚ Excellent programming and statistics skills
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
1. Collaborate with the team to develop design concepts and visual solutions.
2. Work with our draping software to create virtual draping experiences for our customers.
3. Assist in creating graphics for our products.
4. Assist in video editing and video making tasks to produce engaging promotional content for our brand
5. Apply your knowledge of digital marketing and Instagram marketing to strategize and execute effective social media campaigns that drive brand awareness and engagement.
6. Embrace design thinking principles to solve complex creative challenges and contribute innovative ideas to our design process.
7. Utilize Canva to develop aesthetically pleasing and visually appealing graphics for our online platforms and marketing materials.
8. Assist in creating the social media calendar and graphics while maintaining brand voice and identity.
Job Description:
As Azure Lead Data Engineer, you will take ownership of designing, coding, and implementing data solutions on the platform. With a focus on leveraging DBT for data transformation and modelling, as well as expertise in MDM tools, you will play a pivotal role in architecting scalable and performant data pipelines and warehouses. You will collaborate closely with cross-functional teams to understand business requirements, architect data solutions, and ensure successful project delivery. You will lead a team of skilled engineers who will collectively deliver scalable, highly dependable data solutions for customers.
Responsibilities:
- Lead the design, development, and implementation of data solutions on the Microsoft Azure platform.
- Architect data pipelines, data warehouses, and data lakes using Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Blob Storage.
- Design and implement ETL processes to extract, transform, and load data from various sources into Azure data platforms, utilizing DBT for data transformation.
- Develop scalable and efficient data models to support analytics, reporting, and machine learning initiatives, with a strong emphasis on using DBT for modelling.
- Lead performance optimization efforts to ensure the efficient processing of large volumes of data.
- Mentor and coach junior team members, providing guidance on best practices, technical expertise, and professional development.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Stay abreast of emerging technologies and industry trends in data engineering and cloud computing.
Qualifications:
- BE in Computer Science or a related field.
- ~10 years of experience in data engineering, designing, and implementing data solutions on the Microsoft Azure platform.
- Deep understanding of Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Blob Storage.
- Proficiency in DBT (Data Build Tool) for data transformation and modelling.
- Experience working with any Master Data Management (MDM) tools.
- Experience with data governance and metadata management tools such as Azure Purview or similar.
- Proficiency in programming languages such as Python, Scala, or Java.
- Experience with big data technologies such as Hadoop, Spark, and Kafka.
- Strong leadership skills with the ability to lead and mentor a team of engineers.
- Supporting the COO
- Creating, managing, and organising his day-to-day tasks
- Keeping track of office activities, projects, and deadlines
- Managing client coordination, office administration, research, and reporting
- Giving daily / weekly briefings to the COO
- Communicating with clients, employees, and vendors on behalf of the COO
- Working closely with the COO to keep him well informed of upcoming commitments and responsibilities, following up appropriately
- Protecting the COO's time by being an effective gatekeeper
- Keeping records and organising files and information
- Keeping a bird's eye view of the customers and service fulfilment at all times
- Explaining and assigning tasks to team members, delegating work, and answering their questions
- Addressing customer complaints and tickets
- Escalating critical issues to the COO
- Supervising overall office administration; overseeing staff attendance, leave requests, and other notifications
- Managing our Slack workspace: adding or removing members, organising channels, ensuring the team is comfortable with Slack
- Organising meetings, taking detailed minutes, and planning actionable steps
- Handling issue announcements
- Undertaking web research to find answers and solutions, and drafting research notes
- Drafting briefs, memos, and action plans
- Organising information into presentations, reports, and actionable insights
What you need to have:
- Preferably MBA (HR)
- Well-spoken, highly articulate
- Outstanding verbal and written communication skills
- Should be able to convey complex ideas in a simple, structured, and concise language.
- Ambitious, driven, self-motivated
- Highly disciplined, with a strong work ethic
- Cheerful and good-natured
- Friendly, professional demeanour
- Exceptional interpersonal skills
- Tech-savvy: able to quickly pick up new software, tools, apps, websites, etc.
• Strong knowledge of data structure and algorithms
• Ability to write complex SQL
• Familiarity with test-driven development and continuous integration
• Strong knowledge of and hands-on experience with code development tools (Eclipse, Git, Jenkins, unit testing frameworks)
• Familiarity with software development methodologies such as Agile
• Knowledge of JavaScript and AngularJS would be a plus
• Knowledge of Java and Python
• Knowledge of Unix and shell scripting would be a plus
• Strong leadership skills
• Desire to learn and develop new tools and techniques and share with the team
• Capability to mentor junior team members
• Active involvement in all phases of software development
• Ability to establish trusted partnerships with executive-level stakeholders
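The test-driven development familiarity asked for above follows a simple loop: write a failing test against the desired behaviour first, then implement just enough code to make it pass. A minimal stdlib sketch, where `normalize_sku` and its test cases are invented examples rather than anything from a real codebase:

```python
import unittest

def normalize_sku(raw):
    """Strip surrounding whitespace and dashes from a SKU, then
    uppercase it - implemented only after the tests below existed."""
    return raw.strip().strip("-").upper()

class NormalizeSkuTest(unittest.TestCase):
    # In TDD these assertions are written first, as the specification.
    def test_strips_and_uppercases(self):
        self.assertEqual(normalize_sku("  ab-123- "), "AB-123")

    def test_already_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_sku("AB-123"), "AB-123")
```

Running `python -m unittest` in the project directory executes the suite; a CI server such as Jenkins runs the same command on every push, which is where continuous integration meets TDD.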









