
Job Description
- Gather and document requirements using techniques such as interviews, workshops, surveys, business process descriptions, use cases, etc.
- Interact with clients to gather requirements using elicitation techniques such as document analysis, requirements workshops, focus groups, etc.
- Conduct business analysis and research to identify key metrics and opportunities for improvement
- Create various Scope Management Artifacts like WBS, Product Backlogs, Feature Sets, MRDs, PRDs, FRDs, Use Cases, Business Cases, User Stories etc.
- Provide vision and direction to the Agile development team and stakeholders throughout the project by leading product release planning and setting expectations for the delivery of new functionality.
- Analyze the feasibility of development requirements for new systems and upgrades to existing systems.
- Manage project scope and control scope creep.
- Raise change requests, interact with the development team, and help produce impact analyses and estimates.
- Run capability gap analyses and deliver reports against them.
- Define business cases that can feed into a project charter.
- Prioritize Requirements independently or in collaboration with Dev Team/Stakeholders using various prioritization techniques.
- Run projects in Scrum mode and write user stories in full detail, down to the system level wherever required.
- Write and manage project scope in Jira and Confluence.
- Participate in user acceptance testing.
Requirements
- 2+ years of experience in a similar role for web/mobile projects.
- Excellent documentation, verbal & written communication, and team collaboration skills.
- Sharp analytical and problem-solving skills.
- Outstanding presentation and leadership skills.
- Experience in iterative development methodologies like Agile.
- Technical writing skills.
- Experience with project management tools such as Jira and Confluence.

Company name: PulseData labs Pvt Ltd (captive Unit for URUS, USA)
About URUS
We are the URUS family (US), a global leader in products and services for Agritech.
SENIOR DATA ENGINEER
This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks, strong skills in SQL Server, SSIS, and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive, results-oriented individual with a passion for data and a strong understanding of data warehousing principles.
Responsibilities
Data Integration
- Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
- Troubleshoot and resolve Databricks pipeline errors and performance issues.
- Maintain legacy SSIS packages for ETL processes.
- Troubleshoot and resolve SSIS package errors and performance issues.
- Optimize data flow performance and minimize data latency.
- Implement data quality checks and validations within ETL processes.
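To make the data-quality bullet above a little more concrete, here is a minimal, framework-agnostic sketch in Python/pandas of the kind of row-count, null-ratio, and duplicate-key checks an ETL step might run before loading. The table and column names (order_id, amount) are invented for the example; in a Databricks or SSIS pipeline the same checks would live in PySpark or a data flow task.

```python
import pandas as pd

def validate_extract(df: pd.DataFrame, min_rows: int = 1, max_null_ratio: float = 0.01) -> list[str]:
    """Return a list of data-quality violations for a freshly extracted frame."""
    problems = []
    if len(df) < min_rows:
        problems.append(f"expected at least {min_rows} rows, got {len(df)}")
    # Key columns must stay below the allowed null ratio (column names are illustrative).
    for column in ("order_id", "amount"):
        if column not in df.columns:
            problems.append(f"missing required column: {column}")
            continue
        null_ratio = df[column].isna().mean()
        if null_ratio > max_null_ratio:
            problems.append(f"{column}: {null_ratio:.1%} nulls exceeds limit of {max_null_ratio:.1%}")
    # Duplicate business keys usually signal a bad join upstream.
    if "order_id" in df.columns and df["order_id"].duplicated().any():
        problems.append("duplicate order_id values found")
    return problems

if __name__ == "__main__":
    sample = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, None, 5.0]})
    for issue in validate_extract(sample):
        print("DQ violation:", issue)
```

In practice the list of violations would fail the pipeline run or be written to a quality log rather than printed.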
Databricks Development
- Develop and maintain Databricks pipelines and datasets using Python, Spark, and SQL (see the sketch after this list).
- Migrate legacy SSIS packages to Databricks pipelines.
- Optimize Databricks jobs for performance and cost-effectiveness.
- Integrate Databricks with other data sources and systems.
- Participate in the design and implementation of data lake architectures.
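For the Databricks bullets above, here is a minimal PySpark sketch of the read-transform-write shape such a pipeline usually takes. The source path, column names, and target table (/mnt/raw/orders/, analytics.orders_clean) are placeholders invented for the example, and writing in Delta format assumes a Databricks or otherwise Delta-enabled Spark runtime.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() simply reuses it.
spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Extract: raw CSV files landed in the lake (path is a placeholder).
raw = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/raw/orders/")
)

# Transform: basic cleansing plus a derived load date.
clean = (
    raw.dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("load_date", F.current_date())
)

# Load: write to a Delta table (assumes a Delta-enabled runtime such as Databricks).
(
    clean.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("analytics.orders_clean")
)
```

On Databricks this would normally run as a scheduled job, and the same read-transform-write skeleton is a common target when migrating legacy SSIS packages.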
Data Warehousing
- Participate in the design and implementation of data warehousing solutions.
- Support data quality initiatives and implement data cleansing procedures.
Reporting and Analytics
- Collaborate with business users to understand data requirements for department-driven reporting needs.
- Maintain existing library of complex SSRS reports, dashboards, and visualizations.
- Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.
Collaboration and Communication
- Comfortable in an entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
- Collaborate effectively with business users, data analysts, and other IT teams.
- Communicate technical information clearly and concisely, both verbally and in writing.
- Document all development work and procedures thoroughly.
Continuous Growth
- Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
- Continuously improve skills and knowledge through training and self-learning.
This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned.
Requirements
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 7+ years of experience in data integration and reporting.
- Extensive experience with Databricks, including Python, Spark, and Delta Lake.
- Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
- Experience with SSIS (SQL Server Integration Services) development and maintenance.
- Experience with SSRS (SQL Server Reporting Services) report design and development.
- Experience with data warehousing concepts and best practices.
- Experience with Microsoft Azure cloud platform and Microsoft Fabric desirable.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with Agile methodologies.
Must have:
- 8+ years of experience with a significant focus on developing, deploying & supporting AI solutions in production environments.
- Proven experience in building enterprise software products for B2B businesses, particularly in the supply chain domain.
- Good understanding of generics, OOP concepts, and design patterns
- Solid engineering and coding skills, with the ability to write high-performance, production-quality code in Python
- Proficiency with ML libraries and frameworks (e.g., Pandas, TensorFlow, PyTorch, scikit-learn).
- Strong expertise in time series forecasting using statistical, ML, DL, and foundation models (see the sketch after this list)
- Experience processing time series data with techniques such as decomposition, clustering, and outlier detection and treatment
- Exposure to generative AI models and agent architectures on platforms such as AWS Bedrock, Crew AI, Mosaic/Databricks, Azure
- Experience working with modern data architectures, including data lakes and data warehouses, using one or more frameworks such as Airbyte, Airflow, Dagster, AWS Glue, Snowflake, and dbt
- Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying ML models in cloud environments.
- Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment.
- Effective communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
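As a toy illustration of the time series forecasting expectation above, the sketch below fits a Holt-Winters exponential smoothing model with statsmodels on a synthetic monthly series and scores a 12-month hold-out. The data, seasonal period, and model choice are assumptions made purely for the example, not a statement of how the team works.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand series with trend and yearly seasonality (purely illustrative).
rng = np.random.default_rng(42)
periods = 60
index = pd.date_range("2019-01-01", periods=periods, freq="MS")
trend = np.linspace(100, 160, periods)
season = 15 * np.sin(2 * np.pi * np.arange(periods) / 12)
demand = pd.Series(trend + season + rng.normal(0, 5, periods), index=index)

# Hold out the last 12 months to check forecast quality.
train, test = demand[:-12], demand[-12:]

model = ExponentialSmoothing(
    train, trend="add", seasonal="add", seasonal_periods=12
).fit()
forecast = model.forecast(12)

mape = (abs(forecast - test) / test).mean() * 100
print(f"12-month hold-out MAPE: {mape:.1f}%")
```

Statistical, deep learning, and foundation-model forecasters would all slot into the same train/hold-out/score loop.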
Good To Have:
- Experience with MLOps tools and practices for continuous integration and deployment of ML models.
- Familiarity with deploying applications on Kubernetes
- Knowledge of supply chain management principles and challenges.
- A Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field is preferred
1. Work on generating and nurturing leads for the organization using different marketing channels
2. Generate new leads using LinkedIn Sales Navigator and send InMail to potential prospects
3. Build and cultivate prospect relationships by initiating communications and conducting follow-up contacts to move opportunities through the sales funnel
4. Do regular follow-ups with clients over LinkedIn and email
5. Generate new leads using cold calling, email marketing, social media, and other relevant marketing channels
6. Organize and keep the lead status updated in the CRM software
7. Understand the pain points faced by the prospects during communication and identify if they're looking for specific features
8. Check for competitor products mentioned/used by leads and prospects during communication and the intent behind using them
Who can apply:
1. Interpersonal & communication abilities
2. Good knowledge of internet searching, profile/data searching
3. Strong communication skills are indispensable for lead generation
4. Computer literacy
5. A street-smart candidate
6. Amazing research skills
7. Basic knowledge of lead-generation tools will be a plus
About Us
We have grown over 1400% in revenues in the last year.
Interface.ai provides an Intelligent Virtual Assistant (IVA) to financial institutions to automate calls and customer inquiries across multiple channels and engage their customers with financial insights and upsell/cross-sell.
Our IVA is transforming financial institutions' call centers from a cost center to a revenue center.
Our core technology is built 100% in-house with several breakthroughs in Natural Language Understanding. Our parser is built on zero-shot learning, which helps us launch industry-specific IVAs that can achieve over 90% accuracy on day one.
We are 45 people strong with employees spread across India and US locations. Many of them come from ML teams at Apple, Microsoft, and Salesforce in the US, along with enterprise architects with 20+ years of experience building large-scale systems. Our India team consists of people from ISB, IIMs, and many who have previously been part of early-stage startups.
We are a fully remote team.
Founders come from Banking and Enterprise Technology backgrounds with previous experience scaling companies from scratch to $50M+ in revenues.
As a Site Reliability Engineer, you will be in charge of:
- Designing, analyzing and troubleshooting large-scale distributed systems
- Engaging in cross-functional team discussions on design, deployment, operation, and maintenance, in a fast-moving, collaborative set up
- Building automation scripts to validate the stability, scalability, and reliability of interface.ai's products and services, and to enhance the productivity of interface.ai's employees (see the sketch after this list)
- Debugging and optimizing code and automating routine tasks
- Troubleshooting and diagnosing hardware or software issues, and proposing and implementing solutions that reduce their recurrence
- Performing periodic on-call duty to safeguard the security, availability, and reliability of interface.ai's products
- Following solid coding and engineering practices
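As a hint of what the automation-script responsibility above can look like in practice, here is a minimal Python sketch of an availability probe that hits a list of HTTP health endpoints and reports status and latency. The endpoint URLs are placeholders, and a production version would feed results into proper monitoring rather than print them.

```python
import time
import urllib.error
import urllib.request

# Endpoint URLs are placeholders; a real list would come from config or service discovery.
ENDPOINTS = [
    "https://status.example.com/healthz",
    "https://api.example.com/v1/ping",
]

def probe(url: str, timeout: float = 3.0) -> dict:
    """Hit an endpoint once and record its status and latency in milliseconds."""
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            status = response.status
    except urllib.error.URLError as exc:
        status = f"error: {exc.reason}"
    latency_ms = round((time.monotonic() - started) * 1000, 1)
    return {"url": url, "status": status, "latency_ms": latency_ms}

if __name__ == "__main__":
    for endpoint in ENDPOINTS:
        result = probe(endpoint)
        print(f"{result['url']:45} {result['status']!s:14} {result['latency_ms']} ms")
```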
Requirements
You can be a great fit if you are:
- Extremely self-motivated
- Able to learn quickly
- Growth Mindset (read this if you don't know what it means: https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322)
- Emotional Maturity (read this if you don't know what it means: https://medium.com/@krisgage/15-signs-of-emotional-maturity-38b1a2ab9766)
- Passionate about the possibilities at the intersection of AI + Banking
- Worked in a startup of 5 to 30 employees
- A developer with a strong interest in systems design. You will be building, maintaining, and scaling our cloud infrastructure through software tooling and automation.
- 4-8 years of industry experience developing and troubleshooting large-scale infrastructure on the cloud
- Have a solid understanding of system availability, latency, and performance
- Strong programming skills in at least one major programming language and the ability to learn new languages as needed
- Strong system/network debugging skills
- Experience with management/automation tools such as Terraform, Puppet, Chef, or Salt
- Experience with setting up production-level monitoring and telemetry
- Expertise in container management and AWS
- Experience with Kubernetes is a plus
- Experience building CI/CD pipelines
- Experience working with WebSockets, Redis, Postgres, Elasticsearch, and Logstash
- Experience working in an agile team environment and proficient understanding of code versioning tools, such as Git.
- Ability to effectively articulate technical challenges and solutions.
- Proactive outlook for ways to make our systems more reliable
About the Company
- 💰 Early-stage, ed-tech, funded, growing, growing fast
- 🎯 Mission Driven: Make Indonesia competitive on a global scale
- 🥅 Build the best educational content and technology to advance STEM education
- 🥇 Students-First approach
- 🇮🇩 🇮🇳 Teams in India and Indonesia
Skillset 🧗🏼♀️
- You primarily identify as a DevOps/Infrastructure engineer and are comfortable working with systems and cloud-native services on AWS
- You can design, implement, and maintain secure and scalable infrastructure delivering cloud-based services
- You have experience operating and maintaining production systems in a Linux based public cloud environment
- You are familiar with cloud-native concepts - Containers, Lambdas, Orchestration (ECS, Kubernetes)
- You’re in love with system metrics and strive to help deliver improvements to systems all the time
- You can think in terms of Infrastructure as Code to build tools for automating deployment, monitoring, and operations of the platform (see the sketch after this list)
- You can be on-call once every few weeks to provide application support, incident management, and troubleshooting
- You’re fairly comfortable with Git, the AWS CLI, Python, the Docker CLI, and in general all things CLI. Oh! Bash scripting too!
- You have high integrity, and you are reliable
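To ground the AWS and automation bullets above, here is a minimal sketch (assuming boto3 is installed and AWS credentials are configured) of the kind of small operational check such tooling often starts with: flagging ECS services whose running task count has drifted from the desired count. The cluster name is a placeholder, and a real tool would add pagination and alerting.

```python
import boto3

def ecs_capacity_report(cluster: str) -> list[dict]:
    """Flag ECS services whose running task count differs from the desired count.

    Only the first page of services (up to 10) is inspected to keep the sketch
    short; a real tool would use paginators and feed results into alerting.
    """
    ecs = boto3.client("ecs")
    service_arns = ecs.list_services(cluster=cluster, maxResults=10)["serviceArns"]
    if not service_arns:
        return []
    described = ecs.describe_services(cluster=cluster, services=service_arns)["services"]
    return [
        {
            "service": svc["serviceName"],
            "desired": svc["desiredCount"],
            "running": svc["runningCount"],
        }
        for svc in described
        if svc["runningCount"] != svc["desiredCount"]
    ]

if __name__ == "__main__":
    # The cluster name is an assumption made for the example.
    for row in ecs_capacity_report("production"):
        print(f"{row['service']}: running {row['running']} of {row['desired']} desired tasks")
```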
What you can expect from us 👌🏼
☮️ Mentorship, growth, great work culture
- Mentorship and continuous improvement are a part of the team’s DNA. We have a battle-tested robust growth framework. You will have people to look up to and people looking up to you
- We are a people-first, high-trust, high-autonomy team
- We live in the TDD, Pair Programming, First Principles world
🌏 Remote done right
- Distributed does not mean working in isolation, feeling alone, being buried in Zoom calls
- Our leadership team has been WFH for 10+ years now and we know how remote teams work. This will be a place to belong
- A good balance between deep focussed work and collaborative work ⚖️
🖥️ Friendly, humane interview process
- 30-minute alignment check and screening call
- A short take-home coding assignment, no more than 2-3 hours. Time is precious
- Pair programming interview. Collaborate, work together. No sitting behind a desk and judging
- In-depth engineering discussion around your skills and career so far
- System design and architecture interview for seniors
What we ask from you👇🏼
- Bring your software engineering — both individual brilliance and collaborative skills
- Bring your good nature — we're building a team that supports each other
- Be invested or interested in the company vision
• Edit videos, add music, subtitles, and create thumbnails
• Create transitions, titles, graphics, and other supporting materials for the video
• Create graphics on Canva/Photoshop for Instagram & FB posting
• Conceptualize and present ideas as per the brief
• Contribute to the conceptualization of projects, come up with ideas, and contribute to the scripting process
• Edit content in collaboration with the content team
- Experience in Java Programming with Selenium and Mobile Application (Appium).
- Experience in designing and developing data quality automation and executing test plans.
- Experience in CI/CD and developing Test Automation Tools for Data Quality Assessment.
Knowlarity Communications is India's largest cloud-based solutions provider. Our virtual phone system and enterprise solutions help make your business reliable and intelligent. With the capability to process over a million calls an hour, Knowlarity is a trusted brand for more than 8,000 companies worldwide, SMBs as well as enterprises. We are funded by Sequoia Capital and Mayfield, headquartered in Singapore, and have offices in Gurgaon, Mumbai, Bangalore, Dubai, and the Philippines. Knowlarity solves business problems by making telephony intelligent and reliable in real time over the cloud for enterprises.
Must Have:
Languages: C, Python
Databases: MySQL, PostgreSQL
Tools: Git
Operating System: Linux
Protocols: SIP, RTP, WebRTC (see the sketch below)
Good to have:
Cloud services: AWS, GCP, Azure
Tools: FreeSWITCH, Asterisk, OpenSIPS
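As a small illustration of the SIP requirement above, here is a hedged Python sketch of a SIP OPTIONS liveness probe sent over UDP, the sort of check often pointed at FreeSWITCH, Asterisk, or OpenSIPS boxes. The target host, user, and header values are placeholders, and a production probe would use a proper SIP stack rather than hand-built messages.

```python
import socket
import uuid

def sip_options_ping(host: str, port: int = 5060, timeout: float = 2.0):
    """Send a SIP OPTIONS request over UDP; return the first response line, or None on timeout."""
    local_ip = socket.gethostbyname(socket.gethostname())
    branch = "z9hG4bK" + uuid.uuid4().hex[:16]
    message = (
        f"OPTIONS sip:{host}:{port} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_ip}:5060;branch={branch}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:monitor@{local_ip}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{host}:{port}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}@{local_ip}\r\n"
        f"CSeq: 1 OPTIONS\r\n"
        f"Contact: <sip:monitor@{local_ip}:5060>\r\n"
        f"Content-Length: 0\r\n\r\n"
    )
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(message.encode(), (host, port))
        data, _ = sock.recvfrom(4096)
        return data.decode(errors="replace").splitlines()[0]  # e.g. "SIP/2.0 200 OK"
    except socket.timeout:
        return None
    finally:
        sock.close()

if __name__ == "__main__":
    # The hostname is a placeholder; point it at a real SIP server to test.
    print(sip_options_ping("sip.example.com"))
```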
We offer:
- A competitive salary and extensive social benefits
- Opportunity to be part of a team that invented and dominates the emerging market in the cloud telephony industry.
- Massive opportunities for growth.
- Work from a prime location - easy accessibility from both Gurgaon and Delhi.
- Work-life balance and support for career development.
- An amazing life inside Knowlarity! Want to know more about it?
Then let's stay connected!
https://www.facebook.com/Knowlarity/
https://twitter.com/knowlarity
https://www.linkedin.com/company-beta/410771/
- Maintaining and updating databases in Excel, Mailchimp and other platforms
- Executing marketing campaigns on Mailchimp and other email marketing platforms
- Monitoring calendars and booking meetings and calls for the management team
- Competitor research, analysis and reporting
- Liaising and coordinating with suppliers including design agencies, hotels, publications and others
- Ad hoc support to team including assisting with events and promotional activity







