
- Minimum of 6 years' experience in .NET Core 3.0/MS.NET and Azure
- Should have advanced knowledge of C# programming, ASP.NET Web API, REST frameworks, and the MVC and MVVM architectures
- Understanding of Agile/Scrum and project management methodologies
- Should be able to understand high-level technical architecture and develop low-level design
- Should be able to lead a team of technical resources against sprint delivery targets
- Strong understanding of design patterns, plus advanced code review and debugging skills
- Experience with front-end technologies such as HTML5, CSS, and JavaScript is required.
- Experience with Angular or React is a plus
- Good communication skills, quick learner with professional attitude and a team player
- Good customer facing skills and experience in working with other offshore/onsite teams



Title: Senior Software Engineer – Python (Remote: Africa, India, Portugal)
Experience: 9 to 12 Years
Compensation: INR 40-50 LPA
Location Requirement: Candidates must be based in Africa, India, or Portugal. Applicants outside these regions will not be considered.
Must-Have Qualifications:
- 8+ years in software development with expertise in Python
- Strong understanding of async frameworks (e.g., asyncio)
- Experience with FastAPI, Flask, or Django for microservices (see the sketch after this list)
- Proficiency with Docker and Kubernetes or AWS ECS; Kubernetes experience in particular is important
- Familiarity with AWS, Azure, or GCP and IaC tools (CDK, Terraform)
- Knowledge of SQL and NoSQL databases (PostgreSQL, Cassandra, DynamoDB)
- Exposure to GenAI tools and LLM APIs (e.g., LangChain)
- CI/CD and DevOps best practices
- Strong communication and mentorship skills
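A minimal sketch of the async microservice style these qualifications point to, assuming FastAPI and httpx; the routes and internal backend URLs are hypothetical placeholders, not details from the posting:

# Minimal async microservice sketch (hypothetical routes and backends).
import asyncio
import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
async def health() -> dict:
    # Simple liveness probe, handy when the service runs under Kubernetes.
    return {"status": "ok"}

@app.get("/orders/{order_id}")
async def get_order(order_id: int) -> dict:
    # asyncio lets one worker overlap slow I/O: both hypothetical
    # backends are queried concurrently rather than sequentially.
    async with httpx.AsyncClient() as client:
        user, items = await asyncio.gather(
            client.get(f"http://users.internal/api/{order_id}"),
            client.get(f"http://items.internal/api/{order_id}"),
        )
    return {"user": user.json(), "items": items.json()}

Run locally with uvicorn (e.g., uvicorn main:app --reload), then containerize for Docker/Kubernetes.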
Job Summary:
The Senior Forensic Analyst has strong technical skills and an eagerness to lead projects and work with our clients. Apply Incident Response, forensics, log analysis, and malware triage skills to solve complex intrusion cases at organizations around the world. Our consultants must be comfortable working in teams to tackle challenging projects, communicating with clients, and creating and presenting high-quality deliverables.
ROLES AND RESPONSIBILITIES
· Investigate breaches leveraging forensic tools including EnCase, FTK, X-Ways, SIFT, Splunk, and custom investigation tools to determine the source of compromises and malicious activity that occurred in client environments. The candidate should be able to perform forensic analysis on:
· Host-based systems such as Windows, Linux, and macOS
· Firewall, web, database, and other log sources to identify evidence and artifacts of malicious and compromised activity
· Cloud-based platforms such as Office 365, Google, Azure, AWS, etc.
· Perform analysis on identified malicious artifacts
· Contribute to the curation of threat intelligence related to breach investigations
· Excellent verbal and written communication and experience presenting technical findings to a wide audience of varying technical expertise
· Be responsible for integrity in analysis, quality in client deliverables, as well as gathering caseload intelligence.
· Responsible for developing the forensic report for breach investigations related to ransomware, data theft, and other misconduct investigations.
· Must also be able to manage multiple projects daily.
· Manage junior analysts and/or external consultants providing investigative support
· Act as the most senior forensic analyst: assist staff, review all forensic work product to ensure consistency and accuracy, and provide additional support based on workload or complexity of matters
· Ability to analyze workflow, processes, tools, and procedures to create further efficiency in forensic investigations
· Ability to work more than 40 hours per week as needed
DISCLAIMER: The above statements are intended to describe the general nature and level of work being performed. They are not intended to be an exhaustive list of all responsibilities, duties, and skills required of personnel so classified.
SKILLS AND KNOWLEDGE
· Proficient with host-based forensics, network forensics, malware analysis, and data breach response
· Experienced with EnCase, Axiom, X-Ways, FTK, SIFT, ELK, Redline, Volatility, and open-source forensic tools
· Experience with a common scripting or programming language, such as Perl, Python, Bash, or PowerShell (a small example follows below)
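As one small illustration of the scripting skills listed above, this sketch hashes collected files so their integrity can be verified later in an investigation; the "evidence" directory is a hypothetical placeholder.

# Compute SHA-256 hashes of collected artifacts (illustrative only).
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for artifact in sorted(Path("evidence").rglob("*")):
    if artifact.is_file():
        print(f"{sha256_of(artifact)}  {artifact}")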
JOB REQUIREMENTS
· Must have at least 5 years of incident response or digital forensics experience and a passion for cybersecurity
· Consulting experience preferred.
WORK ENVIRONMENT
While performing the responsibilities of this position, the work environment characteristics listed below are representative of the environment the employee will encounter: Usual office working conditions. Reasonable accommodations may be made to enable people with disabilities to perform the essential functions of this job.
PHYSICAL DEMANDS
· Minimal physical exertion is required.
· Travel within or outside of the state may be required.
· Light work: exerting up to 20 pounds of force occasionally, and/or up to 10 pounds of force as frequently as needed to move objects.


About the Role:
- We are looking for a highly skilled and experienced Senior Python Developer to join our dynamic team based in Manyata Tech Park, Bangalore. The ideal candidate will have a strong background in Python development, object-oriented programming, and cloud-based application development. You will be responsible for designing, developing, and maintaining scalable backend systems using modern frameworks and tools.
- This role is hybrid, with a strong emphasis on working from the office to collaborate effectively with cross-functional teams.
Key Responsibilities:
- Design, develop, test, and maintain backend services using Python.
- Develop RESTful APIs and ensure their performance, responsiveness, and scalability (a minimal example follows this list).
- Work with popular Python frameworks such as Django or Flask for rapid development.
- Integrate and work with cloud platforms (AWS, Azure, GCP or similar).
- Collaborate with front-end developers and other team members to establish objectives and design cohesive code.
- Apply object-oriented programming principles to solve real-world problems efficiently.
- Implement and support event-driven architectures where applicable.
- Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
- Write clean, maintainable, and reusable code with proper documentation.
- Contribute to system architecture and code review processes.
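A minimal sketch of the RESTful API work described above, assuming Flask, with an in-memory dict standing in for a real database; the resource name and fields are illustrative:

# Minimal Flask REST sketch (hypothetical "tasks" resource).
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
_tasks: dict[int, dict] = {}  # stands in for a real database
_next_id = 1

@app.post("/tasks")
def create_task():
    global _next_id
    payload = request.get_json(silent=True)
    if not payload or "title" not in payload:
        abort(400, description="'title' is required")
    task = {"id": _next_id, "title": payload["title"], "done": False}
    _tasks[_next_id] = task
    _next_id += 1
    return jsonify(task), 201

@app.get("/tasks/<int:task_id>")
def read_task(task_id: int):
    task = _tasks.get(task_id)
    if task is None:
        abort(404)
    return jsonify(task)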
Required Skills and Qualifications:
- Minimum of 5 years of hands-on experience in Python development.
- Strong understanding of Object-Oriented Programming (OOP) and Data Structures.
- Proficiency in building and consuming REST APIs.
- Experience working with at least one cloud platform such as AWS, Azure, or Google Cloud Platform.
- Hands-on experience with Python frameworks like Django, Flask, or similar.
- Familiarity with event-driven programming and asynchronous processing (see the asyncio sketch after this list).
- Excellent problem-solving, debugging, and troubleshooting skills.
- Strong communication and collaboration abilities to work effectively in a team environment.
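And a small asyncio producer/consumer sketch of the event-driven, asynchronous processing mentioned above; the event shape and sleep are illustrative stand-ins for a real event source:

# Event-driven processing sketch with asyncio (illustrative events).
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for event_id in range(5):
        await queue.put({"id": event_id, "type": "user_signup"})
        await asyncio.sleep(0.1)  # pretend events trickle in
    await queue.put(None)         # sentinel: no more events

async def consumer(queue: asyncio.Queue) -> None:
    while (event := await queue.get()) is not None:
        print(f"handling event {event['id']}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())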
Responsibilities:
- Develop and maintain a high-quality, scalable, and efficient Java codebase for our ad-serving platform.
- Collaborate with cross-functional teams, including product managers, designers, and other developers, to understand requirements and translate them into technical solutions.
- Design and implement new features and functionalities in the ad-serving system, focusing on performance optimization and reliability.
- Troubleshoot and debug complex issues in the ad server environment, providing timely resolutions to ensure uninterrupted service.
- Conduct code reviews, provide constructive feedback, and enforce coding best practices to maintain code quality and consistency across the platform.
- Stay updated on emerging technologies and industry trends in ad serving and digital advertising, and integrate relevant innovations into our platform.
- Work closely with DevOps and infrastructure teams to deploy and maintain the ad-serving platform in a cloud-based environment.
- Collaborate with stakeholders to gather requirements, define technical specifications, and estimate development efforts for new projects and features.
- Mentor junior developers, sharing knowledge and best practices to foster a culture of continuous learning and improvement within the development team.
- Participate in on-call rotations and provide support for production issues as needed, ensuring maximum uptime and reliability of the ad-serving platform.
· Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant product.
· BlueYonder is seeking a Senior/Principal Architect in the Data Services department (under the Luminate Platform) to act as one of the key technology leaders who build and manage BlueYonder's technology assets in the Data Platform and Services.
· This individual will act as a trusted technical advisor and strategic thought leader to the Data Services department. The successful candidate will have the opportunity to lead, participate, guide, and mentor others on the team on architecture and design in a hands-on manner. You are responsible for the technical direction of the Data Platform. This position reports to the Global Head, Data Services and will be based in Bangalore, India.
· Core responsibilities include architecting and designing (along with counterparts and distinguished architects) a ground-up, cloud-native (we use Azure) SaaS product in order management and micro-fulfillment.
· The team currently comprises 60+ global associates across the US, India (COE), and the UK, and is expected to grow rapidly. The incumbent will need leadership qualities to also mentor junior and mid-level software associates on our team. This person will lead the Data Platform architecture (streaming and bulk) with Snowflake, Elasticsearch, and other tools.
Our current technical environment:
· Software: Java, Spring Boot, Gradle, Git, Hibernate, REST API, OAuth, Snowflake
· Application Architecture: scalable, resilient, event-driven, secure multi-tenant microservices architecture
· Cloud Architecture: MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
· Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, Ignite
Job Description - 221135
Cloudera is looking for a highly experienced software engineer with strong expertise in Java development and a specialty in platform architecture to join the Cloudera Lens team.
Cloudera Lens is a high-fidelity, context-rich, and fully correlated self-service observability & optimization tool that analyzes the state and wellness of a customer’s environments and empowers them to proactively discover and address unknown unknowns in their data, scale operations without compromising on performance or costs, and expedite remediation of issues.
As a Java engineer, you will be working in a team of engineers led by an Engineering Manager, collaborating with other engineers and stakeholders in India, the United States, and other countries around the globe.
Responsibilities:
- Lead the architecture, design, and implementation of key aspects of Cloudera Lens data collection, data analytics, data correlation, and recommendations.
- Work with product management, engineering, UX, and documentation teams to deliver high-quality products.
- Interact with partners and customers to help define roadmap and shape the technology.
- Empower team members to deliver high-quality software at a fast pace.
Requirements:
- Proven track record of performance.
- Passionate about software engineering. Clean coding habits, attention to detail, and focus on quality and testability.
- Strong software engineering skills: object-oriented design, data structures, algorithms.
- Experience with containerization orchestration technologies: Kubernetes, Docker.
- Deep knowledge of system architecture, including process, memory, storage, and network management is highly desired.
- Experience with the following: Java, concurrent programming, and related areas.
- Experience with Java memory management, performance tuning, and scaling.
- Experience in building horizontally scalable products handling multi-terabyte datasets is desirable.
- Experience with relational and non-relational data stores: PostgreSQL, Amazon S3.
- Strong oral and written communication skills in English.
Advantageous To Have:
- Experience in building enterprise-grade cloud products.
- Experience with building/using cross-functional observability products.
- BS or MS in computer science.
- Cloud experience: AWS, Azure, GCP.
- Python, Linux, Micro Services experience.
What You'll do:
- 4+ years of experience building scalable backends using Node.js
- In-depth knowledge of at least one Node.js framework (e.g., Express, Hapi, Koa.js)
- Hands-on experience developing REST APIs using Node.js and any of the above frameworks
- Should have experience with Socket.IO
- Familiarity with OAuth integration for social networking APIs (Facebook, Twitter, LinkedIn, Google+)
- Good understanding of standard authentication systems such as OAuth2 and JWT (see the sketch after this list)
- Knowledge of server-side templating (e.g. Jade, Handlebars.js, etc.)
- Should have an understanding of data models, caching, and async mechanisms
- Hands-on experience implementing role-based user authentication and authorization systems
- Understanding of caching, database interactions, and middleware
- Able to engineer the best-performing solutions while always keeping scalability in mind
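A concept sketch of the OAuth2/JWT requirement above. The flow is the same in Node.js (e.g., with the jsonwebtoken package); it is shown here in Python with PyJWT purely to keep the illustration compact, and the secret and claims are illustrative.

# JWT issue/verify sketch (illustrative secret and claims).
import time
import jwt  # pip install PyJWT

SECRET = "change-me"

def issue_token(user_id: str, role: str) -> str:
    claims = {"sub": user_id, "role": role, "exp": int(time.time()) + 3600}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("user-42", "admin")
print(verify_token(token)["role"])  # -> admin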
What makes you a great fit:
- Strong problem-solving skills
- Knowledge of data structures and algorithms
- Hungry for more responsibility and knowledge
- Passion for building robust systems that are engineered to handle failure scenarios, an undying love and attitude for maintaining coding standards
- Strong advocate for producing quality software who makes sure issues are raised and resolved
- Experience with at least one cloud platform such as AWS, GCP, Azure, or DigitalOcean (Docker, Kubernetes, and microservices are good to have)
Must-have skillsets:
Skills: Node.js
Experience required: + years
Job Type: Full time/ Permanent
Perks and Benefits :
- 5-day work week
- Flexible shift timings
- Company-sponsored certifications.
- Team friendly culture
- Flat hierarchy
- Carrom and table tennis games, and cricket tournament participation for interested employees
- Snack-filled pantry for team members
- Group Medical Insurance (*)
Infrastructure
Pocket Gems wants to build the greatest games and interactive entertainment in the world. That's the mission our founders began in an apartment above a pizza shop back in 2009, and we continue it today.

Pocket Gems has grown to over 250 people in San Francisco. With $155 million in backing from Sequoia Capital and Tencent, we're constantly breaking new ground with graphically rich mobile games, fun new genres of mobile entertainment, and innovative technologies like our mobile-first Mantis Engine.

Our products have been downloaded over 325 million times by players around the world. We have several flagship products, including the most recent, Adventure Chef Merge Explorer, a casual merge-and-explore game. Some of our other ongoing hits include Episode, a mobile storytelling network and platform, and War Dragons, a visually stunning 3D real-time strategy game.

Pocket Gems is home to some of the most massive and delightful mobile-first games, like War Dragons and Episode. Those games need a solid backend platform to perform critical tasks that delight our players; that platform is supported and optimized by our Central Engineering team. As a Sr. Software Engineer on the Central Infrastructure Team, you will build microservices that act as the core of all our games, facilitate the processing and recording of billions of events per day, and support critical systems for marketing and finance. You will be responsible for some of our biggest projects as you build APIs and infrastructure that scale to millions of players in real-time games.
What You'll Do:
• Implement flexible, reusable, and scalable solutions to improve our data pipeline
• Develop microservices for critical infrastructure, like A/B tests and offer recommendations, that are mission-critical to the business (a sketch of deterministic A/B assignment follows this list)
• Develop microservices for our games, such as real-time communication platforms, leaderboards, etc.
• Build and maintain integrations with third-party APIs that you suggest or write yourself
• Build scalable web tools (including open source tools) to support data analysis and visualization for the company, and influence what we build for our games' players
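A minimal sketch of one way an A/B-test microservice can assign variants deterministically: hashing the player ID gives a stable bucket without storing per-user state. The experiment name and 50/50 split are hypothetical.

# Deterministic A/B bucketing sketch (hypothetical experiment).
import hashlib

def ab_bucket(player_id: str, experiment: str, buckets: int = 100) -> int:
    digest = hashlib.sha256(f"{experiment}:{player_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % buckets

def variant(player_id: str) -> str:
    # First 50 buckets -> control, rest -> treatment.
    return "control" if ab_bucket(player_id, "offer_reco_v2") < 50 else "treatment"

print(variant("player-123"))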
What You Bring to the Central Infrastructure Engineering team:
• Minimum of 7 years of professional experience (including 3+ years of backend experience)
• A degree in computer science, engineering, or a relevant field
• Experience leading complex projects, preferably involving distributed systems
• Deep experience with AWS, GCP, or Hadoop and related backend technologies is a plus
• Strong skills in data structures, algorithms, software design, and OOP
• A love for delighting users, both internal and external, with reliable tools, data, and creative and highly technical solutions to their problems
Extra Gems for:
• Experience in working with microservices

Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team offers many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment (an illustrative sketch follows this list).
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
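A small, language-agnostic illustration (sketched in Python) of the parallelization point above: independent work units are farmed out to a thread pool, the same pattern that underlies scalable multithreaded scanning of distributed storage. The shard list and work function are hypothetical placeholders.

# Parallelizing independent work units (illustrative shards and work).
from concurrent.futures import ThreadPoolExecutor

def scan_shard(shard: str) -> int:
    # Placeholder for real work, e.g., scanning one storage shard.
    return len(shard)

shards = [f"shard-{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    row_counts = list(pool.map(scan_shard, shards))
print(sum(row_counts))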
Requirements
- B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team


