
- Requires 9 hours
- Work from office (Balewadi High Street) involved
Day-to-day responsibilities include:
1. Research and post on a LinkedIn community we are building
2. Conduct market research into social media trends, influencers, etc.
3. Explore AI tools & SaaS tools that can automate and scale current efforts
4. Come up with creative strategies to improve engagement/impressions
5. Research competitors' outreach strategies

About Spry Healthcare

We are seeking an experienced ELK Stack & APM Engineer to design, implement, and maintain our logging, monitoring, and application performance management infrastructure. The ideal candidate will have deep expertise in the Elastic Stack (Elasticsearch, Logstash, Kibana) and Application Performance Monitoring (APM).
Key Responsibilities
- Design, deploy, and maintain production-grade Elasticsearch clusters, ensuring high availability, performance, and scalability
- Implement and optimize log ingestion pipelines using Logstash and Beats
- Create and maintain Kibana dashboards, visualizations, and alerts for operational intelligence
- Configure and manage APM servers to monitor application performance metrics
- Develop and maintain data retention policies and implement data lifecycle management (see the sketch after this list)
- Troubleshoot performance issues and optimize cluster resources
- Implement security best practices and access controls across the ELK stack
- Automate deployment and configuration management using Infrastructure as Code
- Provide technical guidance and support to development teams for log integration
- Conduct capacity planning and resource optimization
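
To ground the data-retention and lifecycle-management bullet above, here is a minimal sketch of registering an ILM policy through the Elasticsearch REST API from Python. The cluster URL, credentials, policy name, and rollover/retention thresholds are illustrative assumptions (and the `max_primary_shard_size` condition assumes Elasticsearch 7.13+), not values taken from this posting.

```python
import requests

# Hypothetical cluster endpoint and credentials; replace with real values.
ES_URL = "https://elasticsearch.example.internal:9200"
AUTH = ("elastic", "changeme")

# A simple hot -> delete lifecycle: roll over at 50 GB or 7 days, drop indices after 30 days.
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_primary_shard_size": "50gb", "max_age": "7d"}
                }
            },
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    }
}

# PUT _ilm/policy/<name> registers (or updates) the policy on the cluster.
resp = requests.put(
    f"{ES_URL}/_ilm/policy/app-logs-policy",
    json=policy,
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```

Index templates would then reference `app-logs-policy` so that each new backing index picks up the same rollover and deletion schedule.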
Required Qualifications
- 2+ years of experience with Elasticsearch, including cluster management and optimization
- Strong knowledge of Logstash configuration, pipeline development, and data transformation
- Expertise in creating Kibana visualizations and dashboards and in implementing alerting (see the aggregation sketch after this list)
- Experience with APM implementation and troubleshooting
- Proficiency in one or more scripting languages (Python, Ruby, Bash)
- Strong understanding of logging architectures and best practices
- Familiarity with monitoring tools and observability practices
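
The Kibana visualization and alerting expertise above ultimately rests on Elasticsearch queries and aggregations. Below is a minimal sketch of one such aggregation issued over the REST API with Python; the index pattern, field names, endpoint, and credentials are assumptions for illustration, and `fixed_interval` assumes Elasticsearch 7.x or later.

```python
import requests

ES_URL = "https://elasticsearch.example.internal:9200"  # hypothetical endpoint
AUTH = ("elastic", "changeme")                           # hypothetical credentials

# Count error-level log lines per hour over the last 24 hours.
query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"log.level": "error"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    "aggs": {
        "errors_per_hour": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "1h"}
        }
    },
}

resp = requests.post(f"{ES_URL}/app-logs-*/_search", json=query, auth=AUTH, timeout=10)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["errors_per_hour"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])
```

A Kibana bar chart over error volume, or an alert rule on it, boils down to an aggregation of roughly this shape.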
About MyOperator:
MyOperator is India's top cloud communications provider, offering a comprehensive SaaS platform to 10,000+ businesses, including IRCTC, Razorpay, and Amazon. Our services include Cloud Call Center, IVR, Toll-free Numbers, and Enterprise Mobility. We've recently ventured into selling WhatsApp Business Solutions, alongside launching Heyo Phone, an SMB-focused conversation app, backed by super-angels Amit Chaudhary and Aakash Chaudhry. Awarded for ease of use and exceptional customer service,
MyOperator leads India's cloud communications segment. Explore our solutions like call masking, call confirmation, and multi-store at myoperator.com.
Job Description
Responsibilities:
● Conduct in-depth research to identify and prospect for qualified leads.
● Utilize various channels such as email, phone, and social media to connect with potential customers.
● Effectively qualify leads through needs discovery conversations to understand their challenges and pain points.
● Develop strong communication skills to present product features and benefits compellingly.
● Maintain accurate records of all lead interactions within the CRM system.
● Contribute to the continuous improvement of the sales development process.
● Product Expertise: Maintain an in-depth knowledge of MyOperator's product offerings, staying up-to-date with new features and capabilities.
● Consultative Selling: Engage in consultative selling, proactively identifying opportunities to enhance the customer's communication infrastructure.
● CXO Interaction: Conduct effective discussions with CXOs and key decision-makers to influence their adoption of MyOperator solutions.
● Market Insights: Stay informed about industry trends, competitor activities, and market dynamics to make informed sales decisions.
● Sales Collateral: Develop and deliver compelling sales presentations, proposals, and other collateral to effectively communicate the value proposition of MyOperator.


We are seeking an experienced and highly skilled Python Developer with some experience in Laravel to spearhead our offshore team in managing a large-scale, USA-based product platform.
As a Senior Python Developer, you will primarily focus on building efficient data-driven applications and robust web scraping solutions. You will leverage your experience with the Scrapy framework for large-scale data collection and processing. While Python is the primary language, some experience in Laravel will be essential to contribute to PHP-based projects, allowing for full-stack support when needed.
This role requires a proactive, resourceful developer who can think on their feet and efficiently tackle complex problems.
Key Responsibilities
- Scrapy Web Scraping: Develop, optimize, and maintain web scrapers using the Scrapy framework to gather, process, and store data from various online sources (a minimal spider sketch follows this list).
- Back-End Development: Design, develop, and implement scalable backend solutions in Python, focusing on reliability, efficiency, and high performance.
- Laravel Integration (As Needed): Contribute to existing Laravel projects, working with the PHP framework to support API endpoints, and ensure seamless integration with Python-based services.
- API Development & Integration: Build and integrate RESTful APIs to facilitate seamless communication between applications and external data sources.
- Data Processing & Storage: Create efficient data processing pipelines and manage SQL/NoSQL databases to store and organize large datasets collected through Scrapy.
- System Architecture: Work collaboratively with other developers to design scalable, resilient architectures that support data-heavy applications.
- Continuous Integration & Automation: Use Python to automate tasks, optimize CI/CD workflows, and enhance deployment processes for faster and more reliable releases.
- Mentorship & Code Review: Mentor junior developers, conduct code reviews, and contribute to maintaining high coding standards and best practices.
Qualifications
Experience
- 5+ years of professional experience in Python development, with a strong focus on web scraping using Scrapy.
- 1-2 years of experience with Laravel (intermediate understanding is acceptable for full-stack support).
- Demonstrated experience with data-driven projects, showcasing resourceful problem-solving abilities.
Technical Skills:
- Python: Advanced proficiency in Python, particularly with libraries for data manipulation and web scraping.
- Scrapy Framework: Extensive experience in Scrapy, including creating custom spiders, handling complex data extraction requirements, and working with proxies and data pipelines.
- Laravel (PHP): Familiarity with Laravel for backend development, including API creation, routing, and MVC structure.
- Database Management: Proficiency in SQL and NoSQL databases for efficient data storage and retrieval.
- API Development: Hands-on experience with RESTful APIs and integrating external services.
- Automation & DevOps: Familiarity with CI/CD tools, Docker, and version control systems (Git).
- Cloud Platforms: Experience with cloud platforms like AWS, Google Cloud, or Azure is a plus.
Why Join Us?
- High-Impact Role: Work on projects where your expertise in Scrapy and Python will directly influence our data capabilities and client success.
- Growth Opportunities: Access to learning resources, mentorship, and a supportive environment for continuous professional development.
- Innovative Culture: Join a team that values out-of-the-box thinking, resourcefulness, and efficient problem-solving.



We are looking for Principal Engineers who are strong individual contributors with expertise and passion for solving difficult problems across many areas.
Your day at nference:
• Acting as an entrepreneur - taking ownership of the problem statement end-to-end
• Delivering direct value to the customer - not just stopping at delivery
• Estimate, plan, divide, and conquer customer problem statements - through sturdily developed, performant technical solutions
• Handle multiple competing priorities and ambiguity - all in a fast-paced, high-growth environment
Qualities Which We Look For In The Ideal Candidate
• 6-8 years of experience in building High Performance Distributed Systems
• Proven track record in building backend systems from scratch
• Excellent coding skills (preferably any two of C/C++/Python and Go)
• Good depth in Algorithms & Data Structures
• Good understanding of OS level concepts
• Experience working with DevOps tools for deployment, monitoring, etc., such as Ansible, ELK, and Prometheus (see the metrics sketch after this list)
• Wide knowledge of different technologies like databases, messaging systems, etc.
• Experience building complex technical solutions - highly scalable service-oriented
architectures, distributed cloud-based systems - which power our products
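
As a small illustration of the monitoring tools mentioned above (Prometheus in particular), here is a minimal sketch of instrumenting a Python service with the official prometheus_client library; the metric names, labels, endpoint, and port are hypothetical.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical service metrics; names and labels are illustrative only.
REQUESTS_TOTAL = Counter(
    "app_requests_total", "Total requests handled", ["endpoint"]
)
REQUEST_LATENCY = Histogram(
    "app_request_latency_seconds", "Request latency in seconds", ["endpoint"]
)


def handle_request(endpoint: str) -> None:
    """Simulate handling a request and record metrics for it."""
    with REQUEST_LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS_TOTAL.labels(endpoint=endpoint).inc()


if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request("/search")
```

A Prometheus server would then scrape the /metrics endpoint on its regular interval and drive dashboards and alerts from these series.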
Benefits:
• Be a part of the “Google of biomedicine” as recognized by the Washington Post
• Work with some of the world's brightest minds, solving exciting real-world problems through Artificial Intelligence, Machine Learning, analytics, and insights triangulated from unstructured and structured information in the biomedical literature as well as from large-scale molecular and real-world datasets.
• Our benefits package includes the best of what leading organizations provide, such as stock options, paid time off, healthcare insurance, and gym/broadband reimbursement.
Role: Talend Developer
Location: Coimbatore
Experience: 4+ years
Skills: Talend, any DB
Notice period: Immediate to 15 days
Role and Responsibilities:
- As a backend developer, your primary focus will be the development of all server-side systems
- A basic understanding of front-end technologies is necessary as well. You will test, secure and deploy your code
- Work experience on Node.js is a must along with a server-side framework like Express.js
- Strong proficiency in JavaScript
- Writing reusable, testable, and efficient code
- Experience and proficiency integrating with REST APIs
- Understanding of scalable computing systems, software architecture, data structures, and algorithms
- Experience in working with databases such as MongoDB, Redis, Elasticsearch, etc.
- Agile/Scrum development cycle understanding.
Skills Required:
- At least 2 years of experience developing backends using Node.js; should be well versed with its asynchronous nature and event loop, and know its quirks and workarounds.
- Good knowledge of MongoDB (must) and of a SQL database such as MySQL.
- Good knowledge of Redis, its data types, and their use cases (see the sketch after this list).
- Experience developing and deploying REST APIs.
- Knowledge of and working experience in a cloud environment (AWS or Azure)
- Good knowledge of Unit Testing and available Test Frameworks.
- Should be a fast learner and a go-getter without any fear of trying out new things
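
To illustrate the Redis data types and use cases referenced above: Redis commands are the same from any client, so the sketch below uses the Python redis-py client as a stand-in (the same commands are available from Node.js clients). The local instance, keys, and values are hypothetical.

```python
import redis

# Hypothetical local instance; keys and values are illustrative only.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# String with TTL: a classic cache / session-token use case.
r.set("session:42", "user-token", ex=3600)

# Hash: store an object's fields without serializing the whole object.
r.hset("user:42", mapping={"name": "Asha", "plan": "pro"})

# List: a simple FIFO job queue (LPUSH to enqueue, RPOP to dequeue).
r.lpush("jobs", "send-welcome-email")
job = r.rpop("jobs")

# Sorted set: a leaderboard ordered by score.
r.zadd("leaderboard", {"asha": 1200, "ravi": 950})
top = r.zrevrange("leaderboard", 0, 2, withscores=True)

print(job, top)
```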



Responsibilities:
- Write well designed, testable, scalable and efficient code and oversee assigned programs (e.g. conduct code review) and provide guidance to team members
- Apply design principles to ensure code modularity and error handling at appropriate levels
- Communicate software requirements to development teams
- Evaluate and select appropriate software or hardware and suggest integration methods
- Address technical concerns, ideas, and suggestions, and provide technological solutions for them
- Monitor systems to ensure they meet both user needs and business goals
- Design the overall structure of the platform and oversee programs to ensure the proper implementation of the architecture
Desired Skills and Experience
- 2-7 years of relevant experience in an online start-up with a record of substantial achievements in the specific role
- Strong programming foundation in PHP, MySQL, and OOP concepts, with the ability to develop core PHP-based applications
- Must have worked on MVC frameworks – Laravel, CodeIgniter, CakePHP, or Zend.
- Experience with JavaScript and jQuery for producing AJAX applications.
- HTML5, CSS3, XML knowledge is mandatory
- Ability to make modifications to the systems with little or no interruption of service
- Solid exposure to API integrations and familiar with various design & architectural patterns
- Understanding fundamental design principles behind a scalable application
- Ability to thrive in a fast-paced, deadline-driven environment




