11+ Pydev Jobs in Bangalore (Bengaluru)
Apply to 11+ Pydev Jobs in Bangalore (Bengaluru) on CutShort.io.
Aviso is the AI Compass that guides Sales and Go-to-Market teams to close more deals, accelerate revenue growth, and find their True North.
We are a global company with offices in Redwood City, San Francisco, Hyderabad, and Bangalore. Our customers are innovative leaders in their market. We are proud to count Dell, Honeywell, MongoDB, Glassdoor, Splunk, FireEye, and RingCentral as our customers, helping them drive revenue, achieve goals faster, and win in bold new frontiers.
Aviso is backed by Storm Ventures, Shasta Ventures, Scale Venture Partners and leading Silicon Valley technology investors
What you will be doing:
● Work in a nosetests-based test case generation framework that provides assert functions for various data types (a minimal test-case sketch follows this list).
● Maintain or re-engineer automation test frameworks, delivering bug fixes and patch sets for existing applications.
● Apply object-oriented concepts in Python 2 and 3.
● Work with Python packages such as Celery, Selenium, psycopg, pymongo, and boto.
● Use version control systems such as Git.
● Develop, debug, enhance, and maintain applications and tools.
● Debug issues using debuggers such as pydevd, pdb, and ptvsd.
● Work with AWS services (EC2, S3).
● Communicate effectively with diverse teams to take a project from start to finish; collaborate with various teams to develop and support the internal data platform and ongoing analyses.
● Build proofs of concept for integration testing using Vue Test Utils.
● Generate code coverage reports for test cases.
● Independently design and build internal tools for infrastructure support.
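To make the nosetests and debugger expectations above concrete, here is a minimal, hypothetical sketch (the helper and data are placeholders, not Aviso's actual framework) of a nose-style test that asserts on different data types, with a commented-out pdb breakpoint:

```python
# Minimal sketch: nose-style test case using nose.tools assert helpers
# for different data types. build_payload() is a hypothetical stand-in
# for a test-data generator.
from nose.tools import assert_equal, assert_is_instance, assert_true


def build_payload():
    # Hypothetical helper returning mixed-type test data.
    return {"id": 42, "tags": ["smoke", "regression"], "active": True}


def test_payload_types():
    payload = build_payload()
    # Drop into the debugger when diagnosing a failure:
    # import pdb; pdb.set_trace()
    assert_is_instance(payload["id"], int)
    assert_true(payload["active"])
    assert_equal(payload["tags"], ["smoke", "regression"])
```

With the nose coverage plugin installed, a coverage report can typically be produced by running the suite with nosetests --with-coverage --cover-html.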
What You Bring:
● 7+ years of experience in automation testing
● Ability to understand requirements and business function lists, create a system test strategy, and plan test activities in line with delivery dates.
● Results-driven quality assurance professional with solid knowledge of testing across software development life-cycle methodologies: Agile, Iterative, Scrum, and Waterfall.
● Proficient in creating test strategies, test plans, and test cases.
● Extensive experience designing automation frameworks and scripts with Selenium, Python, and Vue.js.
● Experience in microservices testing.
● Excellent exposure to quality assurance procedures and tools such as JIRA.
● Work experience with analytics technologies and products like Aviso.
● Experience working on multiple platforms (Linux, macOS).
● Experience applying Lean and Six Sigma best practices to identify, improve, and re-engineer test frameworks.
● Flexibility and ability to learn and use new technologies, and to work both in a team and independently to get things done.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3-month paid sabbatical after 3 years of service
● CEO moonshot projects with cash awards every quarter
● Upskilling and learning support, including paid conferences, online courses, and certifications
● Rs. 2,500 credited every month to your Sodexo meal card
About Autonomize AI
Autonomize AI is on a mission to help organizations make sense of the world's data. We help organizations harness the full potential of data to unlock business outcomes. Unstructured dark data contains nuggets of information that, when paired with human context, will unlock some of the most impactful insights for most organizations, and it’s our goal to make that process effortless and accessible.
We are an ambitious team committed to human-machine collaboration. Our founders are serial entrepreneurs passionate about data and AI and have started and scaled several companies to successful exits. We are a global, remote company with expertise in building amazing data products, captivating human experiences, disrupting industries, being ridiculously funny, and of course scaling AI.
The Opportunity
We’re seeking a Senior DevOps Engineer to design, build, and secure our cloud infrastructure. You’ll play a key role in delivering scalable, highly secure systems — with a strong focus on Azure Cloud, Kubernetes, automation, observability, and cloud security best practices. Experience with Google Cloud is a plus.
What you'll do:
- Design, deploy, and maintain secure and scalable Kubernetes clusters in production.
- Develop and manage Helm charts for deploying applications securely.
- Implement GitOps workflows using ArgoCD, ensuring secure and auditable deployments.
- Set up and manage observability stacks, including Prometheus, Grafana, and Loki, for monitoring, alerting, and logging.
- Implement security best practices, including network policies, RBAC, pod security standards, and secrets management in Kubernetes (see the sketch after this list).
- Automate infrastructure provisioning and security compliance using Terraform, Ansible, or Pulumi.
- Secure cloud infrastructure and enforce security policies in AWS, Azure, or GCP, focusing on IAM, encryption, VPC security, and firewall rules.
- Implement CI/CD pipelines with security scanning (SAST, DAST, container image scanning, and dependency management).
- Enhance system reliability, security, and performance through continuous monitoring, auditing, and automated remediation.
- Collaborate with development and security teams to ensure security and compliance in all DevOps processes.
- Respond to security incidents, conduct forensic analysis, and apply remediation measures.
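As an illustration of the network-policy item above, here is a minimal sketch, assuming the official kubernetes Python client and a placeholder "prod" namespace, that applies a default-deny-ingress NetworkPolicy:

```python
# Minimal sketch: apply a default-deny-ingress NetworkPolicy with the
# kubernetes Python client. The "prod" namespace is a placeholder.
from kubernetes import client, config


def apply_default_deny(namespace: str = "prod") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress", namespace=namespace),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = all pods
            policy_types=["Ingress"],               # no ingress rules => deny all ingress
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy
    )


if __name__ == "__main__":
    apply_default_deny()
```

In practice the same policy would usually be expressed as YAML and applied through a GitOps workflow (e.g. ArgoCD) rather than imperatively; the sketch only shows the shape of the object.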
You’re a Fit If You Have
- 6+ years of experience in DevOps, Site Reliability Engineering (SRE), or Cloud Engineering roles.
- Strong expertise in Kubernetes security, including RBAC, network policies, pod security, and secrets management.
- Hands-on experience with Helm for secure and automated Kubernetes deployments.
- Proficiency in ArgoCD and GitOps methodologies for managing infrastructure as code securely.
- Experience with observability tools such as Prometheus, Grafana, and Loki.
- Expertise in one or more cloud providers (AWS, Azure, or GCP), including IAM, VPC security, and compliance.
- Strong knowledge of Terraform, Ansible, or Pulumi for infrastructure security automation.
- Experience securing CI/CD pipelines using SAST, DAST, and container security scanning (Trivy, Aqua, or Snyk).
- Proficiency in scripting languages like Bash, Python, or Go for security automation.
- Strong understanding of network security, firewall management, TLS, and certificate management.
- Experience with logging, security monitoring, SIEM solutions, and automated alerting.
Bonus Points
- Experience with Service Mesh security (Istio, Linkerd, or Consul).
- Hands-on experience with Zero Trust Security models and policy-as-code frameworks (OPA/Gatekeeper).
- Knowledge of container runtime security using tools like Falco or Sysdig.
- Familiarity with SOC 2, HIPAA or other compliance frameworks.
- Experience with incident response, forensic analysis, and security auditing.
SDE 2 / SDE 3 – AI Infrastructure & LLM Systems Engineer
Location: Pune / Bangalore (India)
Experience: 4–8 years
Compensation: no bar for the right candidate
Bonus: Up to 10% of base
About the Company
AbleCredit builds production-grade AI systems for BFSI enterprises, reducing OPEX by up to 70% across onboarding, credit, collections, and claims.
We run our own LLMs on GPUs, operate high-concurrency inference systems, and build AI workflows that must scale reliably under real enterprise traffic.
Role Summary (What We’re Really Hiring For)
We are looking for a strong backend / systems engineer who can:
- Deploy AI models on GPUs
- Expose them via APIs
- Scale inference under high parallel load using async systems and queues
This is not a prompt-engineering or UI-AI role.
Core Responsibilities
- Deploy and operate LLMs on GPU infrastructure (cloud or on-prem).
- Run inference servers such as vLLM / TGI / SGLang / Triton or equivalents.
- Build FastAPI / gRPC APIs on top of AI models.
- Design async, queue-based execution for AI workflows (fan-out, retries, backpressure); see the sketch after this list.
- Plan and reason about capacity & scaling:
  - GPU count vs. RPS
  - batching vs. latency
  - cost vs. throughput
- Add observability around latency, GPU usage, queue depth, failures.
- Work closely with AI researchers to productionize models safely.
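To illustrate the async, queue-based design mentioned in the list above, here is a minimal sketch (not AbleCredit's actual stack) of a FastAPI endpoint that enqueues prompts into a bounded asyncio queue while a single worker drains it in small batches; run_batch is a placeholder for a real vLLM / TGI / Triton call:

```python
# Minimal sketch: async, queue-based inference API with micro-batching.
# A bounded queue gives simple backpressure; run_batch() is a placeholder.
import asyncio

from fastapi import FastAPI, HTTPException

app = FastAPI()
QUEUE_MAX = 256      # bounded queue -> backpressure when full
BATCH_MAX = 8        # max prompts per model call
BATCH_WAIT_S = 0.02  # how long to wait to fill a batch


async def run_batch(prompts):
    # Placeholder for a real inference backend call (vLLM / TGI / Triton).
    await asyncio.sleep(0.05)
    return [f"echo: {p}" for p in prompts]


async def worker(queue: asyncio.Queue):
    loop = asyncio.get_running_loop()
    while True:
        batch = [await queue.get()]
        deadline = loop.time() + BATCH_WAIT_S
        while len(batch) < BATCH_MAX:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        results = await run_batch([prompt for prompt, _ in batch])
        for (_, fut), result in zip(batch, results):
            fut.set_result(result)


@app.on_event("startup")
async def start_worker():
    app.state.queue = asyncio.Queue(maxsize=QUEUE_MAX)
    asyncio.create_task(worker(app.state.queue))


@app.post("/generate")
async def generate(prompt: str):
    fut = asyncio.get_running_loop().create_future()
    try:
        app.state.queue.put_nowait((prompt, fut))  # reject when full
    except asyncio.QueueFull:
        raise HTTPException(status_code=429, detail="queue full, retry later")
    return {"output": await fut}
```

For rough capacity planning in the same spirit: if one GPU completes a batch of B requests in L seconds, it sustains about B / L requests per second, so the target RPS divided by that figure gives a first estimate of GPU count before adding headroom.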
Must-Have Skills
- Strong backend engineering fundamentals (distributed systems, async workflows).
- Hands-on experience running GPU workloads in production.
- Proficiency in Python (Golang acceptable).
- Experience with Docker + Kubernetes (or equivalent).
- Practical knowledge of queues / workers (Redis, Kafka, SQS, Celery, Temporal, etc.); a minimal Celery sketch follows this list.
- Ability to reason quantitatively about performance, reliability, and cost.
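For the queues/workers item above, a minimal Celery sketch (broker/backend URLs and the task body are placeholders) showing bounded retries with exponential backoff might look like this:

```python
# Minimal sketch: Celery task with bounded retries and exponential backoff.
# Broker/backend URLs are placeholders for illustration only.
from celery import Celery

app = Celery(
    "inference_tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)


@app.task(bind=True, autoretry_for=(Exception,), max_retries=3,
          retry_backoff=True, acks_late=True)
def run_inference(self, prompt: str) -> str:
    # Placeholder body; a real task would call the inference service here.
    return f"echo: {prompt}"


# Producer side (hypothetical): run_inference.delay("hello") enqueues the job;
# a worker process started with `celery -A <module_name> worker` consumes it.
```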
Strong Signals (Recruiter Screening Clues)
Look for candidates who have:
- Personally deployed models on GPUs
- Debugged GPU memory / latency / throughput issues
- Scaled compute-heavy backends under load
- Designed async systems instead of blocking APIs
Nice to Have
- Familiarity with LangChain / LlamaIndex (as infra layers, not just usage).
- Experience with vector DBs (Qdrant, Pinecone, Weaviate).
- Prior work on multi-tenant enterprise systems.
Not a Fit If
- Only experience is calling OpenAI / Anthropic APIs.
- Primarily a prompt engineer or frontend-focused AI dev.
- No hands-on ownership of infra, scaling, or production reliability.
● The candidate will actively seek out new sales leads and business opportunities through active networking and personal, strategic, value-add emails, calls, and social messages.
● Use a combination of outreach mechanisms to nurture leads (calls, emails, marketing automation tools like Outreach, LinkedIn InMails, etc.).
● Learn, leverage, and help evolve our demand generation process.
● Generate appointments by means of proactive outbound prospecting.
● Work directly with sales and marketing to discover opportunities from leads.
● Demonstrate and teach strong selling and influencing skills
● Generate new business opportunities to fuel the sales pipeline for our products across our market segments
Skills And Qualification
● Candidate should be good at cold calling and writing cold emails
● Strong prospecting skills and ability to develop business in new accounts
● Familiar with sourcing prospect contact information using tools like ZoomInfo, Lusha, Sales Navigator, Apollo, Slintel, etc.
● Ability to think of creative ways of prospecting to make outbound more personalized
● Relationship-building with prospects
● Strong analytical skills to identify inefficiencies and improve them
● Self-starter who is able to operate in a hyper-growth environment
Overall 8+ years of experience in performance testing.
Should know AppDynamics or similar monitoring tools.
About us
- Innoviti (http://www.innoviti.com) is India’s largest provider of payment solutions to Enterprise merchants.
- We process $10 Bn of payments from 2,000 cities and today have a 76% share of the Enterprise market.
- Enterprises such as Reliance, Landmark Group, Shoppers Stop, Pantaloons, Hamleys, Van Heusen, Louis Philippe, Madura Garments, and hundreds of other Enterprises are our customers.
- Our vision is to use technology to unlock the hidden value in payments, helping large and small businesses constantly find new and unique ways to grow faster with less effort.
- We believe payment transactions are more than money-moving pipes. Whenever a payment happens, it is not only the merchant who makes money, but also the brand whose product was sold and the bank whose payment instrument was used. A merchant, brand, and bank talk to the same consumer; however, there is no easy way for them to talk to each other. Our technology makes this collaboration happen, and makes it happen at the point of payment. We help them share customers and share marketing budgets, in turn targeting customers better than would otherwise be possible. By making it happen at the point of payment, we ensure for each party that their marketing efforts are translating into sales.
- Today our technology has helped large businesses grow faster with less effort. They have been able to offer EMI, BNPL, cashbacks, loyalty point redemption, and other offers with merchants, brands, and banks participating in them. Every day new use cases are designed and stitched together using our technology platform.
- Our next target is small merchants. For them, the problems are far larger and more complex. They have no means to access the marketing budgets or customers of large brands or banks; they cannot even get past the regional offices of these businesses. This is where we come in. We are bringing the power of the technology created for large merchants to these businesses, helping bring the power of the large brands and banks with whom we already partner to small merchants.
- Innoviti’s first target category in this segment is small merchants selling electronic goods – mobiles, durables, laptops etc. They are struggling the most as their customers find attractive offers online through partner brands and banks, which they can’t match in their stores. These businesses offer the advantage of touch, feel and explanation in local language. What they lack is the technology to discover, access customers and attract them with superior offers.
- GENIE, our smart marketing platform, helps them do that. It is a mobile app that integrates with banks and brands on one end and Google and Facebook on the other. It not only helps them discover and reach customers searching for these products online, but also attracts those customers to their stores with offers better than those available online.
- The app was launched in July 2021 and has shown a 100% month-on-month growth, with more than 23% of the merchant’s monthly sales now happening through it.
- In the next 5 years Innoviti wants to scale up its GTV from $10Bn to $30Bn, multiplying its revenue 10X.
- We want to be recognized in this space as the company that transformed the payments industry by showing ways of extracting value from payment transactions better than anyone else. We want to be setting the path for the next generation of payment solutions, that will be followed by others.
- Towards this, we need to hire aggressive, talented individuals who firmly believe that no problem is unsolvable and that there is always a way to solve the toughest of problems; who get excited by solving such tough problems at scale and seeing their impact in the market; and who feel fulfilled when they see it making a difference to the lives of people around them.
To drive this roadmap the company is looking for talent who are ready to do what no payments platform has done before and what every payment platform will do thereafter.
Would you like to join this journey?
Job description:
Designation: Database Administrator
Location: Bangalore
Responsibilities
- Provision MySQL instances, both in clustered and non-clustered configurations
- Ensure performance, security, and availability of databases
- Prepare documentation and specifications
- Handle common database procedures, such as upgrade, backup, recovery, migration, etc.
- Profile server resource usage, optimize and tweak as necessary
- Collaborate with other team members and stakeholders
Skills and Qualification:
- Graduate degree in Computer Science, Computer Engineering, or related technical discipline from Tier-1 Institutes
- 4-6 years of proven experience in Database Administration
- Decent experience with recent versions of MySQL
- Understanding of MySQL’s underlying storage engines, such as InnoDB and MyISAM
- Experience with replication configuration in MySQL (a minimal replica health-check sketch follows this list)
- Knowledge of de-facto standards and best practices in MySQL
- Proficient in writing and optimizing SQL statements
- Knowledge of MySQL features, such as its event scheduler
- Ability to plan resource requirements from high level specifications
- Familiarity with other SQL/NoSQL databases such as PostgreSQL, MongoDB, etc.
- Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases
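As one concrete flavour of the replication and SQL items above, here is a minimal sketch, assuming mysql-connector-python and placeholder connection details, that checks replica thread health and lag via SHOW REPLICA STATUS:

```python
# Minimal sketch: check MySQL replica health with mysql-connector-python.
# Host, user, and password are placeholders.
import mysql.connector


def check_replica(host: str = "replica-1", user: str = "monitor", password: str = "***"):
    conn = mysql.connector.connect(host=host, user=user, password=password)
    try:
        cur = conn.cursor(dictionary=True)
        cur.execute("SHOW REPLICA STATUS")
        status = cur.fetchone()
        if status is None:
            return "not a replica"
        return {
            "io_thread": status["Replica_IO_Running"] == "Yes",
            "sql_thread": status["Replica_SQL_Running"] == "Yes",
            "lag_seconds": status["Seconds_Behind_Source"],
        }
    finally:
        conn.close()
```

On MySQL versions earlier than 8.0.22 the equivalent statement is SHOW SLAVE STATUS, with the older column names.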
Personality:
The ideal person for this role is someone who loves the challenge of an entrepreneurial environment, takes high ownership (including being available 24x7 for our customers), can deal with complexity and rapid change, and has big dreams to be part of an interesting journey.
The person would be self-driven and results-oriented, with a positive outlook and impeccable integrity, a track record of delivering results consistently in uncertain environments, and excellent communication skills with an ability to manage crucial conversations with senior stakeholders.
The retail industry demands an “I’ll do it now instead of tomorrow” attitude. Please be prepared for an interesting journey if you want to grow fast, with no age barriers.
As a Front End Developer, you will be responsible for implementing visual elements that are visible from the computer user’s vantage point within a web application. You will combine the art of design with the science of programming and will be responsible for translating UI/UX design wireframes into actual code. At times you will be expected to work independently to meet tight deadlines while following design guidelines.
An ideal candidate will have a relevant engineering degree and a minimum of 3 years of experience in a similar role. You will have a good understanding of SEO and be expert-level with Git or another version control tool. Additional proficiency with programming languages and the ability to work independently are key for this role.
Skills and Expectations
We are looking for terrific JavaScript Full Stack Engineers who can contribute to all aspects of application development. You will primarily be working with the product team of ORMAE, so we expect you to lead with a vision.
• Strong experience in building smooth UI/UX workflows to satisfy the business requirements.
• Should be familiar with logging and monitoring tools like Loki/EFK, Prometheus, Sentry, Grafana.
• Strong knowledge of NodeJS, functional programming, and the SDLC (Software Development Life Cycle). Should be able to write both synchronous and asynchronous code using NodeJS.
• Must be proficient in Git.
• Experience in creating CI/CD pipelines.
• Experience working on Linux-based servers.
• Should have experience with both SQL and NoSQL databases. Experience with in-memory databases like Redis is a plus.
• Should have Angular and React knowledge. Experience with web workers and building drag-and-drop web interfaces is a bonus and calls for extra points.
• Should be able to design a database schema for any given problem statement.
• Hands-on experience in developing serverless architecture is a must.
• Knowledge of deployment using Docker, Docker Swarm, Kubernetes, and how containerized applications work is a must.
• Experience in dealing with third-party APIs.
Spark / Scala experience should be more than 2 years.
A combination of Java and Scala is fine; we are also open to a Big Data Developer with strong Core Java concepts. - Scala / Spark Developer.
Strong proficiency in Scala on Spark (Hadoop); Scala + Java is also preferred.
Complete SDLC process and Agile Methodology (Scrum)
Version control / Git





