
Company Description
Gencosys Technologies Pvt. Ltd. is a leading Information Technology company with a strong presence and customer base across South Asia, the Middle East and Africa, Asia Pacific, the Kingdom of Saudi Arabia, and North America. We provide comprehensive IT support and solutions to businesses across a range of industries.
Role Description
This is a full-time on-site role for a Search Engine Optimization Specialist at our Lucknow location. The Specialist will be responsible for conducting keyword research, performing SEO audits, building links, analyzing web analytics, and implementing on-page SEO techniques to improve website rankings and visibility.
Qualifications
- Keyword Research and SEO Audits
- Link Building and Web Analytics
- On-Page SEO
- Experience in optimizing websites for search engines
- Understanding of SEO best practices and industry trends
- Proficiency in web analytics tools and SEO software
- Familiarity with HTML, CSS, and JavaScript
- Excellent analytical and problem-solving skills
- Strong communication and collaboration abilities
- Bachelor's degree in Marketing, Computer Science, or related field

Key Responsibilities
AI Model Development
- Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.
- Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.
Backend Development with FastAPI
- Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs.
- Ensure optimal performance and low-latency for API endpoints, focusing on real-time data processing.
Pipeline and Integration
- Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.
- Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.
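The ingest, transform, and index flow described above can be sketched in plain Python; a real pipeline would typically delegate chunking and indexing to LangChain or LlamaIndex, so the helper names here are illustrative only:

```python
def chunk(text: str, size: int = 100) -> list[str]:
    # Naive fixed-size chunking; production pipelines usually split
    # on sentence or token boundaries instead.
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(docs: dict[str, str], size: int = 100) -> dict[str, list[str]]:
    # Map each document id to its chunks; a vector store would
    # additionally embed each chunk for similarity search.
    return {doc_id: chunk(text, size) for doc_id, text in docs.items()}

def query(index: dict[str, list[str]], term: str) -> list[str]:
    # Toy keyword lookup standing in for semantic retrieval.
    return [c for chunks in index.values() for c in chunks if term in c]
```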
Collaboration with Cross-Functional Teams
- Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.
- Work with front-end developers to integrate AI-powered functionalities into web applications.
Model Optimization and Fine-Tuning
- Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.
- Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.
Documentation and Code Quality
- Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.
- Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.
Research and Innovation
- Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.
- Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.
Required Skills and Experience
Expertise in Generative AI
Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).
LangChain & LlamaIndex
Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.
Python Programming
Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.
API Development with FastAPI
Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.
NLP & Machine Learning
Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.
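Model evaluation, one of the fundamentals listed above, can be illustrated with a hand-rolled precision/recall/F1 computation (a sketch only; in practice scikit-learn provides these metrics):

```python
def precision_recall_f1(y_true: list[int], y_pred: list[int]) -> tuple[float, float, float]:
    # Binary-classification confusion counts.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```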
Database & Storage Systems
Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.
Version Control & CI/CD
Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.
Preferred Skills
Containerization & Cloud Deployment
Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.
Data Engineering
Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.
Knowledge of Front-End Technologies
Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.
About the Role
We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps, helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.
The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.
Key Responsibilities
- AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, and cost-efficiency.
- Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
- Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
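A latency alert of the kind described above reduces to a percentile check against a budget; a minimal sketch (the p99 budget and nearest-rank method are illustrative choices):

```python
def percentile(samples: list[float], pct: float) -> float:
    # Nearest-rank percentile over observed latencies.
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

def should_alert(latencies_ms: list[float], p99_budget_ms: float = 500.0) -> bool:
    # Fire when the observed p99 latency exceeds its budget.
    return percentile(latencies_ms, 99) > p99_budget_ms
```

In production this logic would live in Prometheus/Grafana alert rules rather than application code; the sketch only shows the shape of the check.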
- Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
- Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
- Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
- Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2 years in AI/ML-focused operations.
- Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
- Proficiency in Python and/or other scripting languages for automation.
- Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
- Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
- Knowledge of data governance, model drift detection, and compliance in AI systems.
- Excellent problem-solving, communication, and collaboration skills.
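Model drift detection, mentioned in the qualifications above, can be approximated by comparing a live feature distribution against its training baseline; a minimal mean-shift sketch (the 3-sigma threshold is an illustrative choice, not a standard):

```python
import statistics

def drift_score(baseline: list[float], live: list[float]) -> float:
    # Absolute shift of the live mean, in baseline standard deviations.
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else float("inf")

def is_drifting(baseline: list[float], live: list[float], threshold: float = 3.0) -> bool:
    # Flag when the live mean moves more than `threshold` sigmas.
    return drift_score(baseline, live) > threshold
```

Real drift monitors typically use distribution-level tests (PSI, KS) rather than a single mean shift, but the alerting shape is the same.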
Nice-to-Have
- Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
- Familiarity with data science concepts and frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
- Full-stack development knowledge, to collaborate effectively across end-to-end solution delivery.
- Contributions to open-source MLOps/AI Ops tools or platforms.
- Exposure to Responsible AI practices, model fairness, and explainability frameworks.
Why Join Us
- Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
- Work alongside leading data scientists and engineers on cutting-edge AI solutions.
- Competitive compensation, benefits, and career growth opportunities.
Experience: 1 to 2 years
Budget: As per industry norms
Work Type: Onsite / Full-time
Candidates must have experience in a digital marketing & advertising agency.
Job description:
The ideal candidate will have strong creative skills and a portfolio of work which demonstrates their passion for design.
Responsibilities:
- Collaborate with the team to ensure consistency of designs across various media outlets
- Create strategic and effective branding and marketing collateral, both print and digital
- Maintain awareness of current industry and technology standards, social media, competitive landscape and market trends
Job Title: Business Development Executive.
Location: HSR Layout, 6th sector, Bangalore
Company: A2M Technologies Pvt Ltd.
Job Type: Field work.
Experience: Open to both freshers and experienced candidates.
Salary Range: Up to 4.5 LPA.
Job Summary:
The Field Sales Executive will be responsible for driving sales growth through direct interactions with potential clients. The role involves prospecting, engaging with new and existing customers, and building long-term relationships to promote A2M Technologies' products and services.
Key Responsibilities:
Lead Generation & Prospecting: Identify potential clients through field visits, cold calling, and networking.
Sales Presentations: Present A2M Technologies’ products and services to prospective clients, highlighting their value proposition.
Client Relationship Management: Build and maintain strong relationships with both new and existing clients.
Negotiation & Closing: Negotiate terms of agreements, close sales deals, and ensure customer satisfaction.
Market Research: Stay updated on market trends, competitor activities, and customer needs to adjust strategies accordingly.
Achieve Sales Targets: Meet or exceed monthly and quarterly sales targets set by the management.
Sales Reporting: Provide daily/weekly sales reports and updates to the sales manager.
Product Knowledge: Maintain a deep understanding of the products and services offered by A2M Technologies.
After-Sales Support: Ensure timely follow-up with clients post-sale and resolve any issues related to the product or service.
Cross-functional Collaboration: Work closely with other departments, such as marketing, customer service, and technical teams, to ensure seamless delivery to clients.
Key Qualifications:
Education: Bachelor’s degree in Business, Marketing, or a related field (preferred).
Experience: 0-3 years of experience in field sales or a similar role.
Skills:
Strong communication and negotiation skills.
Ability to work independently and manage time effectively.
Proficiency in using CRM software and MS Office tools.
A valid driver’s license and willingness to travel frequently.
Benefits:
- Career Growth Opportunities: Emphasis on continuous learning, internal promotions, and opportunities for upward mobility within the organization.
- Performance-based Incentives: Bonuses, rewards, or recognition programs tied to meeting or exceeding performance goals.
- Professional Development: Access to training programs, certifications, and workshops to enhance skills and competencies.
You will be working hands-on on a complex and compound product that has the potential to be used by millions of sales and marketing people around the world. You will contribute to delivering an excellent product platform that:
- enables quick iteration
- supports product customization
- and handles scale
What do we expect you to have?
- 2+ years of experience in backend engineering
- An intent to learn and an urge to build a product by learning different technologies
- Interest in writing complex, scalable, and maintainable backend applications
- Tech stack requirements:
Must haves
- Experience building application servers in Java (Spring / Spring Boot), NodeJS, Golang, or Python
- Experience using SQL databases and designing schemas based on application needs
- Experience with container services and runtimes (Docker / docker-compose / k8s)
- Experience with cloud PaaS (AWS / GCP / Azure)
- Experience and familiarity with microservices concepts
- Experience with Bash scripting
Good to have (Preferred)
- Experience with org-wide message queues (RabbitMQ / AWS SQS)
- Experience with task orchestration services (Apache Airflow / AWS Step Functions)
- Experience with infrastructure-as-code (or system configuration) tools (Terraform / Chef / Ansible)
- Experience with build tools (make / Makefile)
- Experience with monitoring and tracing systems for performance / system / application monitoring (Grafana + Loki + Prometheus / AWS CloudWatch)
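Designing schemas "based on application need", as the must-haves put it, can be sketched with sqlite3 from the standard library; the table and index names here are illustrative, and a real service would use a full RDBMS:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE contacts (
    id INTEGER PRIMARY KEY,
    account_id INTEGER NOT NULL REFERENCES accounts(id),
    email TEXT NOT NULL UNIQUE
);
-- Index chosen for the dominant query: contacts by account.
CREATE INDEX idx_contacts_account ON contacts(account_id);
""")
conn.execute("INSERT INTO accounts (id, name) VALUES (1, 'Acme')")
conn.execute("INSERT INTO contacts (account_id, email) VALUES (1, 'a@acme.io')")
rows = conn.execute(
    "SELECT a.name, c.email FROM accounts a JOIN contacts c ON c.account_id = a.id"
).fetchall()
```

The point of the sketch is the design step: constraints and indexes follow from the application's access patterns, not the other way around.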
What will you learn?
- Building a highly available, performant platform of microservices that acts as an API layer
- Industry-standard state-of-the-art tools + methodologies + frameworks + infra for building a product.
- Fable is not a trivial CRUD app. It requires a lot of consideration and care for building the API layer as the product is highly customizable per user.
- How different functions (sales, marketing, product, engineering) in a high-velocity product company work in synergy to deliver an iterative product in real life.
Who would you be working with?
- You would be working directly with the co-founder & CTO, who has built multiple companies before and has led large teams at large-scale companies like ThoughtSpot, Unacademy, etc.
Position details
- Fully remote.
- 5 days/week (all public and government holidays will be non-working days).
- No specific work hours (we will sync over Zoom during the day).
- You will play a key strategic and consultative role in developing, delivering, and maintaining digital ad tech products
- Work as part of the engineering and product teams to develop complex platforms and systems that are scalable to millions of users
- Result-oriented full-stack development for world-class products with a high degree of performance and quality
- Developers in our teams are also adept at formulating product strategies, researching best practices, bringing in expertise, mentoring younger talent, and driving the organization’s goals higher with a self-starter attitude
- A strong sense of commitment, problem-solving, professional ethics, and willingness to learn new things is a standard requirement for all our openings.
Requirement:
- Knowledge of digital ad tech ecosystem
- Minimum 2+ years of experience working in a professional environment on full-stack development
- Proven expertise in creating and developing scalable enterprise or B2C web applications
- Experience in REST API development and MVC/MVVM development methodologies
- Deep understanding of the underlying architecture of web servers, load balancing, and browser nuances. Basically, a solid understanding of fundamentals, including execution models, asynchronous programming, object-oriented concepts, relational concepts, etc.
- Proficiency in one or more of Node.js/MEAN stack, Python, Java, or other frameworks, and willingness to be flexible on stack as per the requirements of the project. Most of our current development is on the MEAN stack; expertise in the stack will be a definite plus
- Proficiency in one or more databases such as MySQL, MongoDB, Cassandra, or other equivalent databases, and willingness to be flexible as per the requirements of the project. Most of our current development is on MongoDB and MySQL; expertise in these will be a definite plus
- Good understanding of developer tools and DevOps, such as Git, Ansible, cloud platforms such as AWS, Docker, etc.
- Candidates with GitHub profiles or blogs with demonstrated knowledge, experience, and/or contributions to open source will have an added advantage
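The asynchronous-programming fundamental called out above can be illustrated with a small asyncio sketch; it is shown in Python for consistency with the rest of this write-up even though much of the stack here is Node.js, and the `fetch` helper is a hypothetical stand-in for real non-blocking I/O:

```python
import asyncio

async def fetch(source: str, delay: float) -> str:
    # Stand-in for a non-blocking I/O call (HTTP, DB, etc.).
    await asyncio.sleep(delay)
    return f"{source}:done"

async def gather_all() -> list[str]:
    # Concurrent fan-out: total wall time is roughly the max delay,
    # not the sum, because the awaits overlap.
    return await asyncio.gather(fetch("users", 0.01), fetch("ads", 0.02))
```

The same overlap-the-waits execution model underlies Node.js's event loop, which is why it is treated as a fundamental regardless of stack.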
Our technology-based client is backed by venture capitalists and angel investors. It has onboarded over 1,000 sellers so far and has facilitated transactions worth over ₹6 crore. Their top clients include CBRE, FedEx, and Swiggy.
They are currently developing India’s first dedicated B2B Service Procurement Platform.
What you will do:
- Understanding client requirements and pitching the relevant solution
- Active front-end selling and cross-selling to potential buyers over the platform
- Coordinating with the company's partners to close deals
- Delivering daily progress, revenue and activity reports for all clients
- Analyzing research, recommending and implementing processes and systems
- Working with the operations and technology teams to ensure efficient delivery of the project as per the guidelines
- Acquiring outbound clients through social media and networks
- Keeping track of all the documentation required for a transaction
- Helping both buyers and sellers in an ongoing transaction
What you need to have:
- Degree in Engineering, business studies, management, commerce or any related area is preferred
- Experience in B2B sales is preferred
- Good communication skills (both English and Hindi)
- Proficiency in Google Workspace and other productivity software
- Demonstrated initiative, boldness and flexibility
MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX’s very own Artificial Intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built Artificial Neural Networks designed to leverage the power of Machine Learning. The Maverick Platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.
Responsibilities:
- Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
- Troubleshoot technical or functional issues across various global applications and platforms in a complex environment to provide timely resolution.
- Apply hands-on experience with Google Cloud Platform.
- Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
- Configure and manage data sources like PostgreSQL, MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc
- Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
- Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
- Work with users to understand and gather their needs for our catalogue, then participate in the required development
- Manage several streams of work concurrently
- Understand how various systems work
- Understand how IT operations are managed
What you will bring:
- 5 years of work experience as a DevOps Engineer.
- Must possess ample knowledge and experience in system automation, deployment, and implementation.
- Must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
- Experience with the software development process and with tools and languages such as Python, Java, MongoDB, shell scripting, MySQL, and Git.
- Knowledge of handling distributed data systems, for example Elasticsearch, Cassandra, Hadoop, and others.
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of a Paytm Food Wallet (up to Rs. 2,500) as a tax-saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
*******************
About the Role
The Dremio India team owns the development of the cloud infrastructure and services that power Dremio's Data Lake Engine. With a focus on query performance optimization and support for modern table formats like Iceberg, Delta Lake, and Nessie, this team provides endless opportunities to define the products for the next generation of data analytics.
In this role, you will get opportunities to impact high-performance system software and scalable SaaS services through the application of continuous performance management. You will plan, design, automate, and execute test runs, followed by deep analysis and identification of key performance fixes in collaboration with developers. An open and flexible work culture, combined with giving employees ownership of the work they do, will help you develop as a leader. The inclusive culture of the company will provide you a platform to bring fresh ideas and innovate.
Responsibilities
- Deliver end to end performance testing independently using agile methodologies
- Prepare performance test plans, load simulators and test harnesses to thoroughly test the products against the approved specifications
- Translate deep insight of architecture, product & usage into an enhanced automated performance measurement & evaluation framework to support continuous performance management.
- Evaluate & apply the latest tools, techniques and research insights to drive improvements into a world-class data analytics engine
- Collaborate with other engineering and customer success functions to simulate customer data, usage patterns, and workloads; execute performance runs; identify and fix customer issues; and ensure customers get a highly performant, optimized, and scalable Dremio experience
- Analyze performance bottlenecks, root-cause issues, file defects, and follow up with developers, documentation, and other teams on the resolution.
- Publish performance benchmark report based on test runs in accordance with industry standards
- Regularly provide the leadership team with an assessment of the performance, scalability, reliability, and robustness of products before they are exposed to customers
- Analyze and debug performance issues in customer environments.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Actively participate in code and design reviews to maintain exceptional quality and deepen your understanding of the system architecture and implementation
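At its core, a performance run of the kind described above is repeated timed execution plus a percentile summary; a minimal harness sketch (the percentile choices are illustrative, and real runs would use a proper benchmarking framework):

```python
import time

def benchmark(fn, runs: int = 100) -> dict[str, float]:
    # Time repeated executions and summarize the latency distribution.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    samples.sort()
    return {
        "p50_ms": samples[len(samples) // 2],
        "p95_ms": samples[int(len(samples) * 0.95) - 1],
        "max_ms": samples[-1],
    }
```

Reporting percentiles rather than averages is what makes such runs useful for finding tail-latency bottlenecks.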
Basic Requirements
- B.Tech/M.Tech/Equivalent in Computer Science or a related technical field
- 8+ years of performance automation engineering experience on large scale distributed systems
- Proficiency in any of Java/C++/Python/Go and automation frameworks
- Hands on experience in integration performance automation using CI/CD tools like Jenkins
- Experience in planning and executing performance engineering tasks to completion and taking ownership of performance epics during a set of sprints.
- Experience in designing, implementing, executing and analyzing automated performance tests for complex, production system software.
- Experience in analyzing performance bottlenecks in system, performing root cause analysis, and following issue resolution workflow to tune the system to provide optimized performance
- Ability to derive meaningful insights from the collected performance data, articulate performance findings effectively with senior team members to evaluate design choices.
- Experience with database systems internals, query optimization, understanding and tuning query access plans, and query execution instrumentation.
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Understanding of distributed file systems like S3, ADLS, or HDFS, and of Hive
- Ability to create reusable components to automate repeatable, manual activities
- Ability to write technical reports and summary and present to leadership team
- Passion for learning and delivering using latest technologies
- Excellent communication skills and affinity for collaboration and teamwork
- Passion and ability to work in a fast paced and agile development environment.
Preferred Qualification
- Hands-on experience with multi-threaded and asynchronous programming models
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- 3-5 years of experience in backend development.
- Must have experience in Python (Flask framework).
- Deep understanding of how RESTful APIs work.
- Familiarity with various design and architectural patterns that work at scale.
- Sound knowledge of NoSQL/SQL databases (MongoDB preferred).
- Strong experience with cloud technology, preferably AWS, GCP, or Azure.
- Core experience in developing complex backend systems.
- Ability to communicate complex technical concepts to both technical and non-technical audiences.
- Passion for application scalability, availability, reliability, and security.