11+ Proxies Jobs in Chennai | Proxies Job openings in Chennai
- Must have 6+ years of experience in C/C++ programming language.
- Knowledge of Go and Python is a big plus.
- Strong background in L4–L7 Internet protocols: TCP, HTTP, HTTP/2, gRPC, and HTTPS/SSL/TLS.
- Background in Internet-security products such as Web Application Firewalls, API security gateways, reverse proxies, and forward proxies.
- Proven knowledge of Linux kernel internals (process scheduler, memory management, etc.)
- Experience with eBPF is a plus.
- Hands-on experience in cloud architectures (SaaS, PaaS, IaaS, distributed systems) with continuous delivery
- Familiar with containerization solutions like Docker/Kubernetes etc.
- Familiar with serverless technologies such as AWS Lambda.
- Exposure to machine learning technologies and distributed systems is a plus
- B.E/B.Tech/MS degree in Computer Science, or equivalent
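To illustrate the L7 territory this role covers (the role itself centers on C/C++), here is a minimal, hedged Python sketch of the request-filtering idea behind a Web Application Firewall; the rule patterns and function names are illustrative assumptions, not any particular product's logic.

```python
import re

# Illustrative WAF-style rules; real products use far richer rule sets.
BLOCK_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # crude SQL-injection signature
    re.compile(r"(?i)<script\b"),               # crude XSS signature
]

def parse_request_line(line: str):
    """Split an HTTP/1.x request line into (method, target, version)."""
    method, target, version = line.strip().split(" ", 2)
    return method, target, version

def should_block(request_line: str) -> bool:
    """Return True if the request target matches any block pattern."""
    _, target, _ = parse_request_line(request_line)
    return any(p.search(target) for p in BLOCK_PATTERNS)
```

In a production proxy this check would sit inline on the request path (and in C/C++ for performance); the sketch only shows the shape of the rule-matching step.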
SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
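The responsibilities above mention SCD strategies; as a hedged sketch, the core of a Slowly Changing Dimension Type 2 upsert can be shown with plain Python dicts (field names like `start_date`/`end_date` are illustrative assumptions — warehouse implementations would do this with MERGE statements or Delta Lake operations).

```python
from datetime import date

def scd2_upsert(history, key, attrs, as_of):
    """Apply an SCD Type 2 update: close the current row for `key`
    if its attributes changed, then append a new current row.
    `history` is a list of dicts, one per dimension-row version."""
    current = next((r for r in history
                    if r["key"] == key and r["end_date"] is None), None)
    if current is not None:
        if current["attrs"] == attrs:
            return history                      # no change: keep current row
        current["end_date"] = as_of             # close out the old version
    history.append({"key": key, "attrs": attrs,
                    "start_date": as_of, "end_date": None})
    return history
```

The same close-then-append logic is what a MERGE-based SCD2 pipeline expresses in SQL.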
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: extensive hands-on Terraform practice; deep CI/CD experience (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
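The qualifications above call out SQL window functions; a minimal sketch using Python's built-in SQLite (table and column names are illustrative assumptions, not from the role):

```python
import sqlite3

# In-memory SQLite stand-in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('south', 100), ('south', 300), ('north', 200), ('north', 50);
""")

# Rank rows within each region by amount, descending -- a typical
# window-function pattern used for deduplication and top-N queries.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
```

The same `RANK() OVER (PARTITION BY … ORDER BY …)` shape carries over directly to Snowflake, BigQuery, and Spark SQL.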
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.
Roles and Responsibilities
1. Manage client and facilitator relationships at MHFAI for training and post-training support.
2. Ideate, design, and implement programs focused on post-training support for mental health first-aiders.
3. Introduce MHFAI's new products to clients, building tools and guidelines that support mental-health implementation for organized groups.
4. Oversee and facilitate MHFAI instructors for training and upskilling.
5. Monitor and coordinate the MHFAI Awards program with all the stakeholders involved.
6. Draft and initiate scientific research, and create workplace tools to measure mental health.
7. Plan and manage webinars, events and activities to promote the idea of mental health at workplaces and educational institutions.
8. Draft, design, manage and implement guidelines / resources with focus on mental health for the benefit of organized groups.
9. Keep up-to-date with the latest trends and best practices in Learning and Development and / or mental health, and incorporate them into our training programs.
10. Manage the Learning and Development budget, ensuring that resources are allocated effectively and efficiently.
11. Provide support in other operational functions of MHFA as needed.
Requirements:
· 10–15 years in clinical work and/or mental health training
· Qualification: education in mental health
· Experience in clinical practice/research
· Exposure to mental health training, including creating content or training on mental health for adults
· Prior experience working with organized groups is desirable
● Generate loan leads from partnered institutions, the open market, and different field channels.
● Develop and maintain relationships with partnered institutions to drive repeat business and referrals.
● Arrange and plan events to generate leads, and handle product queries and service issues in the partnered institutions.
● Meet clients, verify documents, process files, coordinate loan sanction/disbursement, and provide personalized service to clients.
● Ensure the achievement of the given business target in your territory.
Experience: 1+ year in banking sales, such as personal loans and home loans.
Experience: 1 – 2.5 years
Job Location: Chennai, Bangalore & Mumbai
Roles & Responsibilities:
- Proven working experience in Android Native development
- Excellent coding skills in Java or Kotlin or both.
- Should have knowledge of SQL and the SQLite database.
- Strong knowledge of UI design using the latest frameworks.
- Strong knowledge of Android UI design principles, patterns, and best practices.
- Improving Application performance.
- Enterprise application development experience
- Ability to understand business requirements and translate them into technical requirements.
- Experience with third-party libraries and APIs
- Working knowledge of the general mobile landscape, architectures, trends, and emerging technologies
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency
Educational Requirements:
BE/B. Tech, BBA, MBA
General Knowledge, Skills & Abilities:
- Ability to work in a fast-paced environment, on multiple projects concurrently.
- Track record of meeting timelines, milestones, and deliverables.
- Ability to interact effectively with executive level clients.
- Excellent oral/written communication skills. Effective analytical, problem-solving, interpersonal, and time management skills.
- Ability to work under pressure and mitigate risk.
THE IDEAL CANDIDATE WILL
- Engage with executive level stakeholders from client's team to translate business problems to high level solution approach
- Partner closely with practice, and technical teams to craft well-structured comprehensive proposals/ RFP responses clearly highlighting Tredence’s competitive strengths relevant to Client's selection criteria
- Actively explore the client’s business and formulate solution ideas that can improve process efficiency and cut cost, or achieve growth/revenue/profitability targets faster
- Work hands-on across various MLOps problems and provide thought leadership
- Grow and manage large teams with diverse skillsets
- Collaborate, coach, and learn with a growing team of experienced Machine Learning Engineers and Data Scientists
ELIGIBILITY CRITERIA
- BE/BTech/MTech (Specialization/courses in ML/DS)
- At least 7 years of consulting services delivery experience
- Very strong problem-solving skills & work ethics
- Possesses strong analytical/logical thinking, storyboarding and executive communication skills
- 5+ years of experience in Python/R, SQL
- 5+ years of experience in NLP algorithms, Regression & Classification Modelling, Time Series Forecasting
- Hands on work experience in DevOps
- Should have good knowledge of different deployment models such as PaaS, SaaS, and IaaS
- Exposure to cloud technologies like Azure, AWS, or GCP
- Knowledge of Python and packages for data analysis (scikit-learn, SciPy, NumPy, pandas, Matplotlib).
- Knowledge of deep learning frameworks: Keras, TensorFlow, PyTorch, etc.
- Experience with one or more Container-ecosystem (Docker, Kubernetes)
- Experience in building orchestration pipelines that convert plain Python models into deployable API/RESTful endpoints.
- Good understanding of OOP & Data Structures concepts
Nice to Have:
- Exposure to deployment strategies such as blue/green, canary, A/B testing, and multi-armed bandit
- Experience in Helm is a plus
- Strong understanding of data infrastructure, data warehouse, or data engineering
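The eligibility criteria mention turning plain Python models into deployable RESTful endpoints. A hedged, stdlib-only sketch of that pattern is below; the `/predict` route and the stand-in model are assumptions for illustration — a real MLOps pipeline would typically wrap a trained artifact in FastAPI/Flask and run it in a container.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in 'model': a fixed linear scorer. A real pipeline would
    load a trained artifact (e.g. from a model registry) instead."""
    return 2.0 * features["x"] + 1.0

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers["Content-Length"]))
        score = predict(json.loads(body))
        payload = json.dumps({"score": score}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PredictHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
```

Clients then POST a JSON feature payload to `/predict` and receive a JSON score back; swapping the handler body for a serialized model's `predict` call gives the deployable-endpoint shape the role describes.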
You can expect to –
- Work with the world's biggest retailers and help them solve some of their most critical problems. Tredence is a preferred analytics vendor for some of the largest retailers across the globe.
- Create multi-million-dollar business opportunities by leveraging an impact mindset, cutting-edge solutions, and industry best practices.
- Work in a diverse environment that keeps evolving.
- Hone your entrepreneurial skills as you contribute to the growth of the organization.
Job Description
The selected candidate should have a minimum of 18 years of experience delivering SAP projects, including at least 3 years in a leadership role. This role reports to the SAP Practice Delivery head and owns SAP delivery for a market unit at Wipro, serving as a key member of the practice delivery leadership team that delivers all SAP technology solutions while ensuring sustainable, profitable growth and the right level of customer satisfaction.
Responsibilities:
- Customer stakeholder management
- Collaborate with the sales and pre-sales teams to ensure deliverable solution offerings at the right price
- Manage all SAP projects & associated processes
- Manage and engage a large, diverse team and partner ecosystem
- Manage SAP practice operations including workforce management
- Continuous improvement and innovation in delivery models
- SAP capability building and workforce transformation
- Spearheading a revenue portfolio of $100M+ (must)
- Handling an FTE base of 1,000+ resources (must)
Responsibilities
- Work with Sales Engineering, Solution Architect and Services teams to ideate software solutions
- Work closely with product management to extend existing software to fit geo & domain specific needs
- Design client-side and server-side architecture
- Build the front-end of applications through appealing visual design
- Develop and manage well-functioning databases and applications
- Write effective APIs
- Test software to ensure responsiveness and efficiency
- Troubleshoot, debug and upgrade software
- Create security and data protection settings
- Build features and applications with a mobile responsive design
- Write technical documentation
- Work with data scientists and analysts to improve software
Requirements
Proven experience of 4 to 7 years as a Software Engineer
Experience developing cloud SaaS solutions
Knowledge of multiple front-end languages and libraries (e.g., HTML/ CSS, JavaScript, XML, jQuery)
Strong knowledge of multiple back-end languages (e.g., C#, Java, Python) and JavaScript frameworks (e.g., Angular, React, Node.js)
Development in Core Java, J2EE, Struts, Spring, client-side scripting, Hibernate, and databases
Strong experience with the Spring Boot stack (Spring Cloud, Spring Data)
Extensive experience in developing and consuming REST APIs
Experience in Kafka distributed messaging is preferred
Hands-on experience with Redis, Apache Ignite, or Hazelcast
Familiarity with databases (e.g. MySQL, MongoDB), web servers (e.g. Apache) and UI/UX design
Experience working in an Agile methodology
Strong communication skills
Experience in speech technologies is a plus
Experience in building real time solutions is a plus
Degree in Computer Science, Mathematics, or similar field
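The requirements above mention Redis, Apache Ignite, and Hazelcast; a hedged sketch of the look-aside caching pattern those stores typically serve, written here as an in-process Python stand-in (class and key names are illustrative assumptions — the real systems are out-of-process and distributed):

```python
import time

class TTLCache:
    """Minimal in-process stand-in for the key/value-with-TTL pattern
    that Redis, Ignite, and Hazelcast provide out of process."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        """Store `value` under `key`, expiring after `ttl_seconds`."""
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        """Return the cached value, or `default` if missing or expired."""
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]               # lazy expiry on read
            return default
        return value
```

In a Spring Boot service the equivalent would be `SETEX`/`GET` against Redis (or a Hazelcast `IMap` with TTL), with the database consulted only on a cache miss.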
DevOps Engineer
Job Description:
The position requires a broad set of technical and interpersonal skills spanning deployment technologies, monitoring, and scripting, from networking to infrastructure. The candidate should be well versed in troubleshooting production issues and able to drive them through to root-cause analysis (RCA).
Skills:
- Manage VMs across multiple datacenters and AWS to support dev/test and production workloads.
- Strong hands-on experience with Ansible is preferred
- Strong knowledge and hands-on experience in Kubernetes Architecture and administration.
- Should have core knowledge in Linux and System operations.
- Proactively and reactively resolve incidents as escalated from monitoring solutions and end users.
- Conduct and automate audits for network and systems infrastructure.
- Perform software deployments per documented processes, with no impact to customers.
- Follow existing DevOps processes while having the flexibility to create and tweak processes to gain efficiency.
- Troubleshoot connectivity problems across network, systems or applications.
- Follow security guidelines, both policy and technical to protect our customers.
- Ability to automate recurring tasks to increase velocity and quality.
- Should have worked on at least one of these databases: Postgres, MongoDB, CockroachDB, or Cassandra.
- Should have knowledge of and hands-on experience in managing ELK clusters.
- Scripting knowledge in Shell/Python is an added advantage.
- Hands-on experience with K8s-based microservice architecture is an added advantage.
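The skills above include automating recurring audits of systems infrastructure; a minimal hedged sketch of one such check in Python (the threshold and reporting shape are assumptions — real setups would feed results into monitoring/alerting rather than return a list):

```python
import shutil

def disk_audit(paths, threshold_pct=90.0):
    """Return (path, used_pct) for each filesystem whose usage exceeds
    the threshold -- the kind of recurring check worth automating
    before it becomes a paged incident."""
    findings = []
    for path in paths:
        usage = shutil.disk_usage(path)
        used_pct = 100.0 * usage.used / usage.total
        if used_pct >= threshold_pct:
            findings.append((path, round(used_pct, 1)))
    return findings
```

Scheduled via cron or a Kubernetes CronJob, a script like this turns a manual audit into an automated, repeatable one.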
Job Description:
We are looking to hire a skilled Desktop Support Engineer to assist our clients with computer hardware and software issues. Beyond addressing and resolving sudden, specific problems, the engineer will run regular tests and monitor computer systems to prevent problems from occurring.
The candidate should have:
- Troubleshooting hardware and software issues.
- Installing and maintaining hardware and computer peripherals.
- Installing and upgrading operating systems and computer software.
- Troubleshooting networking and connection issues.
- Advising on software or hardware upgrades.
- Providing basic training in computer operation and management.
- Completing job reports and ordering supplies.



