11+ AIM Jobs in Pune | AIM Job openings in Pune

Experience in CyberArk products including EPV (Vault), PVWA, PSM, PTA, PSMP, AIM/AAM, and JIT access
Creating custom plug-ins and APIs (see the sketch after this list)
Custom development using AutoIt
Integration with SailPoint IIQ and ServiceNow
CyberArk certification is preferred
People/team management skills
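For illustration, here is a minimal sketch of the kind of custom API work this role involves: fetching a credential from CyberArk's Central Credential Provider (CCP) web service, part of the AIM/AAM offering. The host, AppID, Safe, and Object values below are hypothetical placeholders, and a real deployment would typically also enforce client-certificate authentication.

```python
# Minimal sketch: retrieving a credential from CyberArk's Central Credential
# Provider (CCP) REST endpoint (AIM/AAM). All connection values are
# hypothetical placeholders, not a real environment.
import requests

CCP_HOST = "ccp.example.com"          # hypothetical CCP host
APP_ID = "MyApp"                      # application ID registered in CyberArk
SAFE = "MySafe"                       # safe holding the account
OBJECT_NAME = "Database-MyApp-user"   # account object name

def get_credential() -> dict:
    """Retrieve an account's credential from the CCP web service."""
    url = f"https://{CCP_HOST}/AIMWebService/api/Accounts"
    params = {"AppID": APP_ID, "Safe": SAFE, "Object": OBJECT_NAME}
    # Client-certificate authentication is commonly required in practice;
    # pass cert=("client.pem", "client.key") if your CCP enforces it.
    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # "Content" carries the secret; "UserName" identifies the account.
    return {"username": data.get("UserName"), "password": data.get("Content")}

if __name__ == "__main__":
    cred = get_credential()
    print(f"Fetched credential for {cred['username']}")
```

The same request/response pattern underpins integrations with systems like SailPoint IIQ and ServiceNow, which consume account data through such APIs.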
About the Company:
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their data life cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
As a Salesforce Developer at Gruve, you'll be instrumental in designing, developing, and implementing cutting-edge solutions for our diverse clients. You'll leverage the full power of the Salesforce platform, including Sales Cloud, Service Cloud, Experience Cloud, and more, to craft exceptional customer experiences. This is your chance to contribute to high-impact projects, collaborate with top-tier talent, and accelerate your career in the cloud computing space using AI.
Key Roles & Responsibilities:
- Develop & Implement: Design, develop, and deploy robust and scalable Salesforce solutions using Apex, Aura and LWC. Contribute to the architecture of complex systems.
- Multi-Cloud Expertise: Implement and customize Salesforce Sales Cloud, Service Cloud, and Experience Cloud. Explore and leverage other Salesforce clouds (e.g., Marketing Cloud, Data Cloud, Revenue Cloud, Health Cloud, Financial Services Cloud) for comprehensive solutions.
- Integrations: Develop and maintain integrations between Salesforce and other systems using REST and SOAP APIs; Mulesoft (or similar middleware) experience is essential. (See the sketch after this list.)
- DevOps: Contribute to our DevOps culture using tools like Copado for CI/CD.
- Client Collaboration: Work directly with clients to understand their needs, translate them into technical specifications, and present solutions.
- Problem Solving: Proactively identify and resolve technical challenges.
- Continuous Learning: Stay up-to-date with the latest Salesforce releases and best practices.
- Agentforce Optimization: Utilize Agentforce to improve agent productivity and customer experience.
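To make the integration expectations concrete, here is a minimal sketch of an external system querying Salesforce over the REST API. The instance URL, API version, and access token are hypothetical placeholders; in a real project the token would come from an OAuth flow or from middleware such as Mulesoft.

```python
# Minimal sketch: an external service running a SOQL query through the
# Salesforce REST API. Connection values are hypothetical placeholders.
import requests

INSTANCE_URL = "https://example.my.salesforce.com"  # hypothetical org
ACCESS_TOKEN = "<oauth-access-token>"               # obtained via OAuth
API_VERSION = "v59.0"

def query_accounts(limit: int = 5) -> list[dict]:
    """Run a SOQL query against the Salesforce REST API."""
    url = f"{INSTANCE_URL}/services/data/{API_VERSION}/query"
    soql = f"SELECT Id, Name FROM Account LIMIT {limit}"
    resp = requests.get(
        url,
        params={"q": soql},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    # The query endpoint returns matching rows under "records".
    return resp.json()["records"]

if __name__ == "__main__":
    for record in query_accounts():
        print(record["Id"], record["Name"])
```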
Basic Qualifications:
- 3+ years of hands-on Salesforce development experience
- Deep proficiency in Apex, Aura, and LWC
- Proven multi-cloud experience including, but not limited to, Sales Cloud, Service Cloud, and Experience Cloud
- Strong understanding of Salesforce architecture and best practices
- Extensive experience with REST and SOAP API integrations
- Hands-on experience with Mulesoft (or similar middleware solutions)
- Experience with Copado for CI/CD is essential
- Agentforce experience is highly desirable
- Salesforce Data Cloud experience is a plus
Preferred Qualifications:
- A bachelor’s or master’s degree in computer science, engineering or a related field.
- Salesforce Certified Platform Developer I & II (Required)
- Salesforce Certified Sales Cloud Consultant (Preferred)
- Salesforce Certified Service Cloud Consultant (Preferred)
- Salesforce Certified Experience Cloud Consultant (Preferred)
- Mulesoft Certified Developer (Preferred)
- Salesforce Data Cloud (Preferred)
Job Role: SAP FICO
Experience: 5+
Location: Pan India
Notice Period: Immediate to 15 Days
Position: C2H
Job Description:
o CA/ICWA candidates are preferred; otherwise, a postgraduate degree such as an MBA or M.Com is the minimum educational qualification
o Should have 5-8+ years of experience in SAP FI and CO, with S/4HANA experience; FSCM and Funds Management experience would be an added advantage
o Must have at least 2 implementation or support projects on S/4HANA, with experience in Product Costing and COPA in the Controlling module
o Should have at least 4 end-to-end implementation and support projects
o SAP Finance (S/4HANA) – basic knowledge of the FI submodules: GL, AR, AP & Assets
o SAP Finance – month-end closing activities, validations and substitutions, and reporting
o Should have worked on two implementation projects in Controlling on S/4HANA
o SAP Controlling – should have hands-on experience in Overhead Cost Controlling and in product costing: product cost planning, cost object controlling, actual costing & Material Ledger, Profitability Analysis (COPA), COPA planning, settlement, and the month-end closing process
o Integration between FICO and other core modules like MM / SD / PP / PS
o Should have strong domain experience in Finance
o Leadership experience, minimum 3 years in a team lead role; strong executive presence and ability to interact with customer top management
o SAP certification is preferred
o Good business process understanding
o Knowledge of SAP Best Practices and building blocks
o Should be able to design and configure business scenarios
o Solution focused – able to provide solutions
o Knowledge of user exits, BAPIs, and upload tools like LSMW, BDC & LTMC
o Developing functional specifications for new developments / change requests
o Day-to-day monitoring of tickets and working towards closure
o Analyzing issues and providing estimates
o Developing functional specification documents for changes and enhancements
o Issue resolution based on SLA
o Coordinating with the technical team to resolve issues in the given objects
o Must be a very good team player with good interpersonal and communication skills
o Business Travel: Project-specific travel is mandatory for all SAP Consultants
That is what makes us special:
• Team-oriented corporate culture, collaboration as equals, and steady knowledge transfer
• Active participation in shaping your future
• Individually tailored mentoring program
• Sustainable career support with our career model and individual development programs
• International project opportunities and network


Wednesday is a digital agency. We work with startups and enterprises to build digital products for their users.
At our core, we are a group of makers - designers, developers, product & project managers. We care about our work and think of it as a craft. We're always learning so we can build better and faster.
As a Technical Lead, your time will be divided between programming and technical oversight.
Core Responsibilities
- Code Reviews: Review all pull requests to ensure features are built correctly following the conventions and guidelines of the project.
- Communication: Work with your team to ensure they understand all the requirements clearly.
- Architecture: Have a clear picture of the system architecture in mind. Lead your teams to implement that design.
- Learn: Learn from the practices followed by other teams and evangelize your learnings.
Skills Required
- Technical expertise with our tech stack - JavaScript, Node.js, React, and AWS.
- Understanding of Agile processes and methodologies.
- Understanding of the continuous delivery (CD) process.
- The ability to debug corner cases, hypothesize and fix bugs.
Skill: Dell Boomi Integration Developer
Duration: 6 Months
Location: Gurugram / Hyderabad / Pune / Delhi / Noida
Technical Skills: Dell Boomi integration, Groovy script, JavaScript, Atom management.
Job Description
- Bachelor’s degree in computer science or a related field
- 3-5 years of experience in integration development.
- Experience with Groovy scripting, JavaScript libraries, and transformation frameworks such as XSLT.
- Strong communication skills with the ability to effectively interface with clients
- Good business analysis/design skills.
- Excellent time management and organizational skills.
- API / REST / SOAP experience
- Able to work with project management systems and time allocation.
- Good knowledge of Atom Management and Deployment


XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies in its sector. Our
vision to evolve into a strong full-service logistics organization reflects itself in our various lines of business, like B2C
logistics 3PL, B2B Xpress, Hyperlocal, and Cross-border Logistics.
Our strong domain expertise and constant focus on innovation have helped us rapidly evolve into the most trusted
logistics partner of India. XB has progressively carved its way towards best-in-class technology platforms, an
extensive logistics network reach, and a seamless last-mile management system.
While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our
big focus areas for the very near future include strengthening our presence as service providers of choice and
leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees is enriching and scaling its end-to-end logistics solutions at a high pace. This is a great opportunity to join
the team working on forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning
and Data Engineering, leading projects and teams of AI Engineers collaborating with Data Scientists. In your role, you
will build high performance AI/ML solutions using groundbreaking AI/ML and BigData technologies. You will need to
understand business requirements and convert them to a solvable data science problem statement. You will be
involved in end to end AI/ML projects, starting from smaller scale POCs all the way to full scale ML pipelines in
production.
Seasoned AI/ML Engineers will own the implementation and productionization of cutting-edge AI-driven algorithmic
components for search, recommendation and insights to improve the efficiencies of the logistics supply chain and
serve the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact to
the organization while solving challenging problems in the areas of AI, ML , Data Analytics and Computer Science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer will work across the areas of:
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning - Logistic Regression, Decision Trees, Random Forests, XGBoost, etc.
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their apps and workflows easily via APIs,
without needing to build AI expertise in each team – covering Decision Support, NLP, and Computer Vision, for public
clouds and enterprise in NLU, Vision, and Conversational AI. The candidate is adept at working with large data sets
to find opportunities for product and process optimization, and at using models to test the effectiveness of different
courses of action. They must have knowledge of a variety of data mining/data analysis methods and data tools, of
building and implementing models, of using/creating algorithms, and of creating/running simulations. They must be
comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion
for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.
Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and
deployment of ML models.
● Building cloud services in Decision Support (Anomaly Detection, Time series forecasting, Fraud detection,
Risk prevention, Predictive analytics), computer vision, natural language processing (NLP) and speech that
work out of the box.
● Brainstorm and Design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively
communicating your needs and understanding theirs, and address external and internal stakeholders'
product challenges.
● Build the core Artificial Intelligence and AI services, such as Decision Support, Vision, Speech, Text, NLP, NLU,
and others.
● Leverage Cloud technology – AWS, GCP, Azure.
● Experiment with ML models in Python using machine learning libraries (PyTorch, TensorFlow), Big Data,
Hadoop, HBase, Spark, etc. (see the sketch after this list).
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to
drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product
development, marketing techniques and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experience, supply chain metrics, and other
business outcomes.
● Develop company A/B testing framework and test model quality.
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects with data science techniques and associated libraries
such as AI/ML or equivalent NLP (Natural Language Processing) packages. This requires a strong
understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning, and
related approaches as they apply to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets and architectural patterns for
successful delivery.
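To ground the experimentation bullet above, here is a minimal sketch of training a small PyTorch classifier on synthetic tabular data, as one might for a fraud-detection POC. The features and labels are synthetic stand-ins, not real shipment data.

```python
# Minimal sketch: training a small PyTorch classifier for a fraud-detection
# style POC. The dataset is synthetic; real pipelines would load features
# from the warehouse and hold out a validation split.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic dataset: 1,000 "shipments" with 8 numeric features; label 1
# marks a (synthetic) fake delivery attempt.
X = torch.randn(1000, 8)
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).float().unsqueeze(1)

model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 1),  # logit output; sigmoid is applied inside the loss
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    preds = (torch.sigmoid(model(X)) > 0.5).float()
    accuracy = (preds == y).float().mean().item()
print(f"final loss={loss.item():.4f}, train accuracy={accuracy:.2%}")
```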
What is required of you?
You will get an opportunity to build and operate a suite of massive scale, integrated data/ML platforms in a broadly
distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science or Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience with building high-performance, resilient, scalable, and well-engineered systems
● Experience in CI/CD and development best practices, instrumentation, logging systems
● Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights
from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as
classification, information retrieval, clustering, knowledge graph, semi-supervised learning and ranking.
● Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest,
Boosting, Trees, text mining, social network analysis, etc.
● Knowledge of web services: Redshift, S3, Spark, Digital Ocean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge of analyzing data from 3rd-party providers: Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using Quicksight, Periscope, Business Objects, D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural
networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests, and proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for Machine learning and complete feedback loops.
● Knowledge of Machine Learning lifecycle and experience working with data scientists
● Experience with Relational databases and NoSQL databases
● Experience with workflow scheduling / orchestration such as Airflow or Oozie (see the sketch after this list)
● Working knowledge of current techniques and approaches in machine learning and statistical or
mathematical models
● Strong Data Engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stack (e.g.
Kafka)
● Relevant experience in fine tuning and optimizing ML (especially Deep Learning) models to bring down
serving latency.
● Exposure to the ML model productionization stack (e.g. MLflow, Docker)
● Excellent exploratory data analysis skills to slice & dice data at scale using SQL in Redshift/BigQuery.
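As a concrete example of the orchestration experience mentioned above, here is a minimal Airflow DAG sketch that chains extract, train, and publish steps on a daily schedule. The DAG id and task bodies are hypothetical stubs, not an actual XpressBees pipeline.

```python
# Minimal sketch: an Airflow DAG that refreshes a model daily. Task
# callables are hypothetical stubs standing in for real pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw shipment data from the warehouse")  # stub

def train():
    print("retrain the model on yesterday's data")  # stub

def publish():
    print("push evaluation metrics to the dashboard")  # stub

with DAG(
    dag_id="daily_model_refresh",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # "schedule_interval" on Airflow < 2.4
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_train = PythonOperator(task_id="train", python_callable=train)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    # Linear dependency: extract -> train -> publish
    t_extract >> t_train >> t_publish
```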
Designation: Linux/System/Support Engineer (L2)
Experience: 2-5 Yrs
Notice Period: Immediate to 30 Days
- Server Monitoring
- Deployments
- Collecting information about the reported issues
- Ensuring that all information about a reported issue has been logged in the ticketing system
- Must be able to follow and execute instructions specified in user guides and emails to run, monitor, and troubleshoot
- Must be able and willing to document activities and procedures
- Must have troubleshooting skills and knowledge of antivirus, firewall, and gateway technologies
- Should be ready to work extended shifts, if required
- Good customer management skills bundled with good communication skills
- Databases: concepts and the ability to use DB tools such as psql
- Good understanding of Oracle, WebLogic, and Linux/Unix terminology, and able to execute commands
- Internet technologies: Tomcat/Apache concepts, basic HTML, etc.
- Able to use MS Excel and PowerPoint


Experience required - 3 years (minimum) in MERN stack
Salary - 15-20 LPA
About the Company
The company is one of the fastest-growing B2B SaaS marketplaces for procuring industrial materials. The startup is generating annual revenue of Rs. 100 Crore. You will get to work in a fast-paced environment with a brilliant, agile tech team led by top engineers.
Location: Pune, 3 months Remote then Work from Office or Hybrid Working
Qualifications & Criteria
1. 3+ years of development experience in full-stack development (preferred: MERN technology stack). Should be proficient in working with technologies like JavaScript and CSS, and with frameworks like Node.js and React.
2. Should have good knowledge of databases.
3. As part of the brains of the startup, You are smart, creative, and love solving business challenges and thereby find new ways to propel the growth of the company.
4. You are passionate about growth and want to become a future technical leader within the company. Your work and attitude should reflect your commitment to the same.
Responsibilities:
1. Should be able to work independently once proper guidance is provided.
2. Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
3. Ensure timely completion of development activity.
4. Prepare training documents and provide training to internal teams on tools.
5. Consult, design, and develop well-structured, scalable in-house tools.
6. Should be able to write optimized business logic for business functions