
About Insane
Charlie Chaplin, Steve Jobs, Oprah, Pythagoras, Nikola Tesla, and Einstein were crazy enough to believe that life is bigger. They believed that we are here to make a difference. To make the world a better place.
And we are Insane enough to believe that too. We are here to take humanity forward. We believe in change, and we believe we need to be better every day.
We are changing this world one step at a time.
Starting with the fundamental of growth: knowledge. We became carriers of change in the education industry. We are on a mission 'to accelerate the impact of coaches in the world'.
About the role
We are looking to hire a dedicated Direct Response Copywriter to create strong communication for every stage of the customer journey. The copywriter will be expected to craft stories that resonate with the audience and move them to action, producing video scripts, landing pages, Facebook and Google ads, emails, and much more.
The copywriter will be data-driven: analyzing performance data to see what is working and what is not, conducting audience research to understand the target market, and building communication strategies that resonate. Responsibilities include evaluating analytics and adjusting content as needed.
Your day-to-day:
- Conducting in-depth audience research on industry-related topics to develop original content
- Writing copy for every stage of the customer journey
- Measuring the impact of the copy through data and improving it on a consistent basis
- Thoroughly checking all completed layouts and artwork to ensure there are no mistakes
- Collaborating with internal partners to interpret project briefs and develop relevant concepts into content
Requirements
- Top-class English communication skills
- Ability to understand the audience and what motivates them
- Ability to think conceptually
- An understanding of user experience
- Proven experience as a copywriter or related role
- Knowledge of online content strategy and creation
About you
- You’re numbers-driven.
- You love data.
- You're passionate and driven.
- You have a sharp eye for detail.
- You're proactive and reactive.
- You have 2+ years in Direct Response Copywriting.

Role: Full-Time, Long-Term
Required: Python, SQL
Preferred: Experience with financial or crypto data
OVERVIEW
We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.
The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.
SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.
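As a quick illustration of that index-aware mindset (the table and index names here are invented for the example), SQLite's EXPLAIN QUERY PLAN shows whether a query actually hits an index; PostgreSQL's EXPLAIN plays the same role:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, ts INTEGER, price REAL)")
# Composite index matching the access pattern: equality on symbol,
# range on ts.
conn.execute("CREATE INDEX idx_trades_symbol_ts ON trades (symbol, ts)")

# Without the index this query would scan the whole table; with it,
# the planner reports an index range search in the plan detail.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT price FROM trades WHERE symbol = ? AND ts >= ?",
    ("BTC", 0),
).fetchall()
print(plan[0][3])  # plan detail, e.g. a SEARCH ... USING INDEX line
```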
Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
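As a rough sketch of what first-class validation can look like (the column names, bounds, and freshness threshold here are illustrative, not a description of our actual stack):

```python
import math
from datetime import datetime, timedelta, timezone

def validate_rows(rows, schema, max_age=timedelta(minutes=15)):
    """Return a list of human-readable problems found in a batch of rows.

    rows   : list of dicts from an ingestion source
    schema : column name -> (expected type, (lo, hi) bounds or None)
    """
    problems = []
    now = datetime.now(timezone.utc)
    for i, row in enumerate(rows):
        # Schema check: every expected column present with the right type.
        for col, (typ, bounds) in schema.items():
            if col not in row:
                problems.append(f"row {i}: missing column {col!r}")
                continue
            val = row[col]
            if not isinstance(val, typ):
                problems.append(
                    f"row {i}: {col!r} has type {type(val).__name__}, "
                    f"expected {typ.__name__}")
                continue
            # Range check: NaN and out-of-range values are flagged, not dropped,
            # so the problem is visible instead of silently propagating.
            if isinstance(val, float) and math.isnan(val):
                problems.append(f"row {i}: {col!r} is NaN")
            elif bounds is not None and not (bounds[0] <= val <= bounds[1]):
                problems.append(f"row {i}: {col!r}={val} outside {bounds}")
        # Freshness check: a stale timestamp suggests a late upstream source.
        ts = row.get("ts")
        if isinstance(ts, datetime) and now - ts > max_age:
            problems.append(f"row {i}: stale timestamp {ts.isoformat()}")
    return problems
```

A real pipeline would route these problems to monitoring and alerting rather than just returning them.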
WHAT YOU WILL BUILD
Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.
Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.
Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.
Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.
Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.
DOMAIN EXPERIENCE
Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.
Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
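Point-in-time correctness is the crux of both paragraphs above. A minimal as-of lookup, sketched in plain Python (a production version would typically use something like pandas `merge_asof` or a temporal SQL join):

```python
from bisect import bisect_right

def asof_lookup(feature_times, feature_vals, query_times):
    """For each query time, return the most recent feature value observed
    AT OR BEFORE that time, never after. Using a later value would leak
    the future into training data, i.e. look-ahead bias.

    feature_times must be sorted ascending and aligned with feature_vals.
    """
    out = []
    for t in query_times:
        # bisect_right places the cut after any observation stamped exactly
        # at t, so values at t are allowed and later ones are excluded.
        i = bisect_right(feature_times, t)
        out.append(feature_vals[i - 1] if i > 0 else None)
    return out
```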
High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.
ENGINEERING STANDARDS
Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.
Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.
Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.
Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.
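In that spirit, a toy pytest-style check of the idempotency property named under pipeline design (the `upsert` helper is purely illustrative):

```python
def upsert(table, rows, key="id"):
    """Idempotent load step: keyed writes, last write per key wins,
    so replaying the same batch leaves the table unchanged."""
    for row in rows:
        table[row[key]] = row
    return table

def test_upsert_is_idempotent():
    batch = [{"id": 1, "v": 10}, {"id": 2, "v": 20}]
    once = upsert({}, batch)
    twice = upsert(dict(once), batch)
    # Applying the batch a second time must be a no-op.
    assert once == twice
    assert once[1]["v"] == 10
```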
TECHNICAL ENVIRONMENT
PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.
WHAT WE ARE LOOKING FOR
Attention to Detail: You notice when something is slightly off and investigate rather than ignore.
Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.
Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.
Long-Term Orientation: You build systems you will maintain for years.
Communication: You document clearly, explain data issues to non-engineers, and surface problems early.
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Role Overview
This is a 20% technical, 80% non-technical role designed for individuals who can blend technical know-how with strong operational and communication skills. You’ll be the bridge between our product and the client’s operations team.
Key Responsibilities
- Collaborate with clients to co-design SOPs for resolving support queries across channels (chat, ticket, voice)
- Scope and plan each integration: gather technical and operational requirements and convert them into an executable timeline with measurable success metrics (e.g., coverage %, accuracy, CSAT)
- Lead integration rollouts and post-launch success loops: monitor performance, debug issues, fine-tune prompts and workflows
- Conduct quarterly “AI health-checks” and continuously improve system effectiveness
- Troubleshoot production issues, replicate bugs, ship patches, and write clear root-cause analyses (RCAs)
- Act as the customer's voice internally, channeling key insights to product and engineering teams
Must-Have Qualifications
- Engineering degree is a must; Computer Science preferred
- Past experience in coding and a sound understanding of APIs are preferred
- Ability to communicate clearly with both technical and non-technical stakeholders
- Experience working in SaaS, customer success, implementation, or operations roles
- Analytical mindset with the ability to make data-driven decisions
We are looking for a candidate with experience in DevSecOps. Please find the JD below for reference.
Responsibilities:
Execute shell scripts for seamless automation and system management.
Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.
Oversee AWS security groups, VPC configurations, and utilize Aviatrix for efficient network orchestration.
Contribute to the OpenTelemetry Collector for enhanced observability.
Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.
Enforce policies through Open Policy Agent (OPA) integration.
Develop and maintain comprehensive runbooks for standard operating procedures.
Utilize packet tracing for network analysis and security optimization.
Apply OWASP tools and practices for robust web application security.
Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.
Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaborate with software and platform engineers to infuse security principles into DevOps teams.
Regularly monitor and report project status to the management team.
Qualifications:
Proficient in shell scripting and automation.
Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.
Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
Familiarity with OpenTelemetry for observability and OPA for policy enforcement.
Experience in packet tracing for network analysis.
Practical application of OWASP tools and web application security.
Integration of container vulnerability scanning tools within CI/CD pipelines.
Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaboration expertise with DevOps teams for security integration.
Regular monitoring and reporting capabilities.
Site Reliability Engineering experience.
Hands-on proficiency with source code management tools, especially Git.
Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.
Please send across your updated profile.
- Responsible for overall AOP target achievement.
- Responsible for leading a team involved in direct Fixed Battery/2W sales.
- Driving BTL and local sales activities through the team.
- Travelling across locations to focus on opportunities for sales and growth.
- Hiring, leading, managing, and motivating the sales team to drive growth numbers.
- A Data and MLOps Engineering lead who has a good understanding of modern data engineering frameworks, with a focus on Microsoft Azure and Azure Machine Learning, its development lifecycle, and DevOps.
- Aims to solve the problems encountered when turning data transformations and data science code into production machine learning systems. Some of these challenges include:
- ML orchestration - how can I automate my ML workflows across multiple environments?
- Scalability - how can I take advantage of the huge computational power available in the cloud?
- Serving - how can I make my ML models available to make predictions reliably when needed?
- Monitoring - how can I effectively monitor my ML system in production to ensure reliability? Not just system metrics, but also get insight into how my models are performing over time
- Reuse – how can I promote reuse of artefacts built and establish templates and patterns?
The MLOps team works closely with the ML Engineering and DevOps teams. Rather than focusing on individual use cases, the emphasis is on building the platforms and tools that drive adoption of MLOps across the organisation, and on developing best practices and ways of working to build a state-of-the-art MLOps capability.
You have a good understanding of AI/machine learning and of software engineering best practices such as cloud engineering, Infrastructure-as-Code, and CI/CD.
You have excellent communication and consulting skills, and deliver innovative AI solutions on Azure.
Responsibilities will include:
- Building state-of-the-art MLOps platforms and tooling to drive adoption of MLOps across the organization
- Designing cloud ML architectures and providing a roadmap for flexible patterns
- Optimizing solutions for performance and scalability
- Leading and driving evolving best practices for MLOps
- Helping to showcase expertise and leadership in this field
Tech stack
These are some of the tools and technologies that we use day to day. Key to success will be attitude and aptitude, with a vision to build the next big thing in the AI/ML field.
- Python - including poetry for dependency management, pytest for automated testing and fastapi for building APIs
- Microsoft Azure Platform - primarily focused on Databricks, Azure ML
- Containers
- CI/CD – Azure DevOps
- Strong programming skills in Python
- Solid understanding of cloud concepts
- Demonstrable interest in Machine Learning
- Understanding of IaC and CI/CD concepts
- Strong communication and presentation skills.
Remuneration: Best in the industry
Connect: https://www.linkedin.com/in/shweta-gupta-a361511
- Minimum 2 years' working experience in application development using Angular 5/6.
- Strong proficiency in JavaScript and the JavaScript object model, JS MVC frameworks, and state management with NgRx (required).
- REST API integration over HTTP
- Third-party integrations (Angular Material or PrimeNG)
- Lazy loading (custom preloading strategy)
- Validation
- Reactive architecture (RxJS Observables and operators)
- Unit testing (not mandatory but would be a plus)
Responsibilities and Duties
- Be part of a full team delivering a complete front-end application
- Ensuring high performance on mobile and desktop
- Cooperating with the back-end developer in the process of building the RESTful API
- Develop/Design, implement, and test high-quality web applications in a cloud-based environment.
- Help brainstorm and plan new applications and web services.
- Take ownership of technical problems and their resolution, proactively communicating with product managers, developers, architects, and the operations team.
- Provide accurate effort-estimates for deliverables.
- Be committed to deadlines.










